Archive for the ‘Tech Ed-6209’ Category

EME 6209 – Stop Motion

Saturday, June 1st, 2019

So we now introduce you to yet another term: stop motion. For a complete history you can always click the wikipedia link provided here. No sense in us reinventing the wheel so-to-speak. There are plenty of other places to help you dig a little deeper into this concept:

  • Dragonframe, in its attempt to sell you screen grabbing software (30 day free trial available) provides an ample explanation.
  • Webopedia offers an explanation with great glossary.
  • Techopedia (though you have to suffer through a few ads) also does a nice job and covers related terms.
  • Best yet is Lynda.com… again trying to sell subscriptions to its classes… it contains a great video demonstration.
So What Does this Have to Do With This Course?

We are looking at stop motion as an introduction to time-based video.

It is interesting to note that South Park, one of the most successful animated TV shows of all time, started out as a stop-motion movie.

But please don’t overlook the fact that audio is also time based. It is harder to visualize, for sure (unless you count the animated music score from previous readings). We will be getting into video editing next, but we are simply telling you that you do not have to shoot actual video to bring time-based media into your lessons. Animated motion, stop motion, flip books… all of them add interest to your projects. The intent is to show you some easy ways to get started… hopefully, some of it will translate/transfer into the actual video editing lessons as we dig deeper into this wonderful world of time-based media.

After Completing this set of Readings You are Expected to Do the Following

To complete this lesson, you MAY utilize your video editor or spend some money on specific stop motion products. We introduce you to both to give you the opportunity for the best experience. We are not endorsing any of these products, but we have done some research and found the ones listed to offer the best experiences. You also need to review the module on kinestasis, which could be a fun final artifact to produce for this activity. To get the pictures to move according to the soundtrack, the trick is to lay your soundtrack down first and then edit the length of the pictures to match the sound. Kinestasis has its own metric: each picture needs to be on the timeline for no more than 1/3 of a second apiece (at 30 frames per second, that is no more than 10 frames each).
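If you want to sanity-check that pacing before you start collecting images, here is a minimal sketch in Python. The 30 fps and 1/3-second figures come from the paragraph above; everything else is simple arithmetic, not a required tool for the assignment.

```python
# Kinestasis pacing check: at 30 frames per second, 1/3 of a second per picture
# means each picture occupies at most 10 frames on the timeline.
fps = 30
max_seconds_per_picture = 1 / 3
frames_per_picture = round(fps * max_seconds_per_picture)
print(frames_per_picture, "frames per picture at most")

# Minimum number of pictures needed for the 30- and 60-second versions.
for length_in_seconds in (30, 60):
    pictures = round(length_in_seconds / max_seconds_per_picture)
    print(f"{length_in_seconds}-second piece needs at least {pictures} pictures")
```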

Here are the specs:

  1. Create, as a minimum, a thirty to sixty-second stop motion artifact… the content/context to be of your choosing. It may be a flip book, an animated Gif, kinestasis, or something created using one of the paid products noted below.
  2. Save the file as an mp4 file and upload it using the File Uploader Function we have provided. This page is password protected (123456).
  3. Also name your file last_name-stopmotion.mp4.
  4. Once your file is successfully uploaded you will receive an "upload successful" message. Post your confirmation as an answer to the one-question survey in Canvas.

Here are some standalone Stop Motion Products (Optional)

  • Disney Storymation Studio by Wonderforge. (iPad/iPhone) The app is free but limited. You can also purchase a self-contained kit for around $30. Provides a lot of the content as well as technology to produce your stop motion.
  • Stikbots (around $25) is actually a toy that you interface with using an app on your tablet to create stop action movies. It also comes with its own YouTube channel.
  • The App store and Play store offer several stop action apps to help you create stop action movies on your mobile devices. Prices vary from free to inexpensive.
  • You can also find several screen grab/capture products for your Mac and for Windows… again, from free to inexpensive. These allow you to capture video game play so you can create your own stop motion from it… this is often referred to as machinima.
  • Last, there are several free and inexpensive animated gif creation products out there… gifmaker being one of them.



EME 6209 – Video File Formats/Compression

Tuesday, June 27th, 2017

Publishing/Rendering your Videos (Intro to File Formats)

Once you have created your videos, you probably will want to post them somewhere… YouTube and Vine are two choices. If you simply want to use them on your own computer in your classroom, you need to understand file formats. Here are a few pointers to get you started:

  • Uncompressed videos are usually too large to play on computers. An uncompressed video can take up as much as 2 gigabytes per minute, making it impractical to handle on most computers purchased for the classroom. Also, you need to know that formatting the video is different if you want to play it on a television set rather than on a computer; the file formats are not the same. All editing packages attempt to make this process easier and use menus to choose which format you want. In some cases, if the video was produced properly, it can be played on both a television and a computer… that is, if the computer has a good DVD player (like the Macs).
  • Both Microsoft and Apple have their own proprietary output formats. For Windows, it is .wmv files. For Macs, it is usually .mov or .mp4 files. Microsoft wants you to use its Media Player to view the videos; Apple wants you to use QuickTime. Not all computers configured for the classroom have either or both of these players on them. At home you can simply download the players (they are both free), but at school you may have to get permission to do this or have your tech coordinator do it for you… (this is another story that we could do a whole module on, but I won’t go there).
  • Not all .wmv or .mov files are created equal. We will cover this in the file format lesson in more detail, but for now, understand that this is based on the compression routines used to compress and decompress the files to make them smaller. Often you will need to find a plug-in for the video player that recognizes that codec. There are attempts being made to standardize all of this, but it takes up to seven years of testing to come to terms with each version of the standard. We are up to version seven now, but as of now only versions 1-4 have been stabilized… so we are still looking at file incompatibilities.
  • In case you are lost on all of this, simply know this: when you do video editing, the raw form of the video you are working on is called the project file. All of the subsidiary files, images, and audio remain in their original resting place. (Just as with PowerPoint, for example, the application “points” to the original file but does not embed the actual video into the .ppt file.) So, what you have to do to complete the project for viewing is to ‘render’ the project to its final compressed version (either .wmv, .mov, etc.). With this process the project is compressed and all subsidiary, linked files are actually embedded into one ‘flat’ file. When you turn in your completed project, I am going to be asking for the finalized file and NOT the un-rendered project file…
  • So when creating your projects make sure you ‘finish the job’ by rendering it according to the specifications…

File Management

File management remains one of the most confusing issues facing those involved with multimedia production. How often has it happened that you save a video, for example, as a .mov file thinking it will be playable on all media players that accept .mov files, only to find out that it does not play on someone else’s machine? The guiding rule is that not all formats are created equal. This is because of the CODEC (compression-decompression routine). In order for you to become familiar with all of this, take a look at the following links. Much of the material repeats itself, but each one presents a different ‘wrinkle’ on the subject. Afterwards, I present some notes and a video I made for another production course I taught (which explains the lesson 9B designation). Rather than recreate it, I decided to leave it the way it is because I think it presents a pretty good, relatively quick overview of how MPEG processing works. It may be a lot more about this subject than what you will need, but after you absorb the content, you will have a good consolidated overview of the topic of video compression.

Compression Theory

This section is a set of notes that explains the motivation and conceptual theory behind video compression.

Motivation

The need to compress video can be boiled down to three specifics:

  • Uncompressed video and audio data are huge. In HDTV, the bit rate easily exceeds 1 gigabit per second. This translates into big problems for storage and network communications:
    • Uncompressed video captured into digital format is also big: 1 second of captured video can equal 27 megabytes (MB).
    • The compression ratio of the so-called ‘lossless’ methods (e.g., Huffman, Arithmetic, LZW, etc.) is not high enough for image and video compression, because the distribution of pixel values is relatively flat, leaving little statistical redundancy for those coders to exploit.
  • Bandwidth constraints make this worse, so the conceptual design of video compression is that you only have to send actual pixel values when they change.
  • Humans tolerate a certain amount of loss (for example, humans do not notice subtle changes in color tones), and that loss can be predicted before it is noticed.

Video compression is merely compacting the digitized video into a smaller space.
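To make the “huge” concrete, here is a minimal back-of-the-envelope sketch. The 640×480, 24-bit, 30 fps figures are assumptions chosen because they land close to the 27 MB-per-second number quoted above; the HD line uses 1920×1080 to show why HDTV pushes past a gigabit per second.

```python
# Uncompressed standard-definition video: width x height x bytes-per-pixel x fps
width, height, bytes_per_pixel, fps = 640, 480, 3, 30
sd_bytes_per_second = width * height * bytes_per_pixel * fps
print(f"SD: {sd_bytes_per_second / 1e6:.1f} MB per second")   # ~27.6 MB/s

# Uncompressed HD video, expressed in bits per second
hd_bits_per_second = 1920 * 1080 * 24 * 30
print(f"HD: {hd_bits_per_second / 1e9:.2f} Gb per second")    # ~1.49 Gb/s
```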

Choices of Codecs

Current Standards… yes the industry is attempting to standardize

  • Motion – JPEG
  • MPEG I
  • MPEG II (even though more recent formats exist, this is the latest ‘standardized’ format)

Products (the following are CODECs, NOT software applications) (and most of them you probably haven’t even heard of):

  • Captain Crunch Media Vision
  • Cinepak
  • DVI-RTL (2)
  • Indeo
  • Pro-Frac
  • SoftVideo
  • Ultimotion
  • Video Apple
  • Video Cube
  • Sorenson
  • MotiVE Media Vision
  • h.264

On the other hand, QuickTime is a software application that uses one or more of the above CODECs (currently, h.264). Like everyone else, Apple is developing its own set of CODECs in hopes that they will become the standard.

Techniques/Strategies

  • Source Coding = encoding the output of an information source into a format that can be transmitted digitally to a receiver as a series of code words such that the average length of the code words is as small as possible.
  • Data Compression = reduction in redundancy …made possible and made necessary by digital transformation (this is what, in effect, jpeg does).

Digitizing, by itself, can add to file sizes:

  • a 4 MHz channel (the space allocated for each television channel) is capable of transmitting 8 million analog samples of a picture per second. The size of a digital representation of the same analog image is increased 8 times (based on a one-bit representation).

Two alternative strategies have been developed

1- One approach is to throw away “unnecessary” information from individual frames (called intra-frame compression). This is known as ‘lossy’ compression.

This can be done at the camera level…

For example, the Digital Betacam reduces data by half (a 2:1 ratio). A DV camera compresses at 5:1. This produces a clean, low-noise signal where every frame is the original signal and is stored on tape or disk, so we can easily isolate frames when needed to edit the material.

The above is an example of Motion-JPEG compression (a technique very similar to the one used with still photos).

2- Then we have MPEG (Moving Picture Experts Group)… this compression method looks at adjacent frames to see which pixels are changing from frame to frame… while visual data may be lost, its advantage is that it is also designed to synchronize with the audio much better.

MPEG has evolved through several versions (hence the .mpg file extension) and is becoming the ‘de facto’ standard compression CODEC (view the video below for more details…)

Required compression ratios for package television via commercial channels.

Look at the chart below. For each of the channel types, the chart shows the required compression ratio needed for that channel to properly handle the selected service at a lossless level. For example, for a PC LAN to service an HDTV broadcast, a 31,000:1 compression ratio would have to be attained; film-quality video would require a compression of 76,000:1, etc. For purposes of this example, we considered cable modems and DSL (i.e., CenturyLink) to be ‘virtually’ equal at the top end, when, in fact, there might be differences… this chart is for illustrative purposes only.
Channel (bit rate)            NTSC TV (168 Mb/s)   HDTV (933 Mb/s)   Film Quality (2,300 Mb/s)

PC Local LAN (30 kb/s)        5,600:1              31,000:1          76,000:1
Modems (56 kb/s)              3,000:1              17,000:1          41,000:1
ISDN (64-144 kb/s)            1,166:1              6,400:1           16,000:1
Cable Modems (10-20 Mb/s)     30:1                 150:1             200:1
Electrical Outlets (varies)   varies               varies            varies
T-1/DSL (1.5-10 Mb/s)         112:1                622:1             230:1
T-3 (42 Mb/s)                 17:1                 93:1              54:1
Fiber Optic (200 Mb/s)        1:1                  5:1               11:1
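The ratios in the chart are simply the service’s uncompressed bit rate divided by the channel’s capacity. A small sketch using a few of the channel figures from the chart (the chart rounds its values; the remaining rows follow the same pattern) shows how the numbers were derived:

```python
# Required compression ratio = uncompressed service bit rate / channel bit rate
services = {"NTSC TV": 168e6, "HDTV": 933e6, "Film quality": 2300e6}   # bits/sec
channels = {"PC Local LAN": 30e3, "Modem": 56e3, "ISDN": 144e3}        # bits/sec

for channel, capacity in channels.items():
    ratios = ", ".join(f"{name} {rate / capacity:,.0f}:1" for name, rate in services.items())
    print(f"{channel}: {ratios}")
```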

How to decide which video CODEC is necessary…

You first need to define the Distribution Medium and Playback Platform:

  • Know the CODEC’s availability on the multiple platforms you plan to distribute your software.
  • Know the CODEC’s ability to adapt the synchronized playback speed to the available hardware without user intervention.
  • Weigh the developer issues (is a slower compression OK?)
  • Know the source of the video and whether it has previously been compressed (yes, compressing an already compressed video can result in a LARGER video!).
  • Know the type of video you are producing: how much motion and color? What image size? How much activity? Sound? Camera moves?

Three criteria involved in selecting a specific CODEC

  1. the Compression Level,
  2. the Quality of the Compressed Video
  3. the Compression/Decompression Speed

Technical discussion: There are two techniques you can use.. intra-frame (within) and inter-frame (between).

How the two alternative compression approaches work (understanding the concepts below is key to your understanding of how compression works in general):

  • Intra-frame Compression

– takes advantage of redundancy within a picture (for example, a picture of a sky has a lot of blue in it)

– takes advantage of human limitations (humans notice change in luminance ten times more readily than changes in color).

  • Inter-frame Compression

– takes advantage of redundancy in a sequence of pictures: at 30 frames per second, each subsequent frame is going to appear a lot like the previous one.

– also uses intra-frame techniques

Two important Compression Terms:

  1. Lossless = allowing exact recovery of the source data
  2. Entropy = the smallest average code word length (i.e., the smallest predictable size) achievable without substantially changing the content of what is being shown (the visual language of the moving image).

Notes/Review:

Entropy is a measure of the smallest average information content per source output unit (i.e. when bits/pixel = 1:1 ratio)

– In order to accomplish an equal broadcast quality level (based on how analog samples are transmitted), it takes 4:1 compression to transmit a monochrome digital signal.

– Color requires an additional 50-200%

Theoretical Review

Compression Principles

1. Data redundancy – sample values are not entirely independent… neighboring values are somehow correlated (both audio & video).

Because of this, a certain amount of predictive coding can be applied.

2. Voice: there is a lot of dead space (silence removal).

3. Images: neighboring samples are normally similar (spatial redundancies removed thru transform coding)

4. Video: successive images are normally similar.

Two basic methods:

  • Lossless – preserves all data, but is inefficient.
  • Lossy – some data is eliminated. But as most images contain more data than what the eye or ear can discern, this can be unnoticeable. However, as file sizes get smaller, loss can be detected. Lossy is better suited than lossless for delivery on movable storage and over networks.

Methodology:

  • Spatial – applied to a single frame, independently of any surrounding frames. (intra frame) (jpeg)
  • Temporal – identifies differences between frames and stores only those differences (inter frame). Also uses intraframe technology to establish keyframes.
  • Keyframe – the reference frame for the frames that follow. (Most editors’ “scrub” controls can only jump to keyframes.) Note: increasing the number of keyframes increases file size.

Factors to be considered:

Handling color

Compression is handled through space reduction (two factors.. humans perceive each of these differently):

– Luminance (brightness)

– Color (chrominance)

These are accounted for separately because the human eye notices differences in luminance at a rate almost ten times higher than changes in color. (For 16-bit color, the image is divided into 16×16 blocks; for 32-bit, 32×32; etc.) That is why the component HD cables on the back of your TV set carry the luminance signal separately from the color signals.

Terms

Time to encode

Symmetric = same amount of time to encode and decode

Asymmetric = encoding is not done in real time. (based on frame rate, size, and data rate of video)

Factors in determining compression time are:

  • Frame size
  • frame rate
  • encode rate (Variable bit rate takes longer)

Data Rate

– Should be maximized for the targeted delivery channel.

  • CD-ROM = 200kb/sec
  • Internet = 1.5 to 50 kb/sec
  • Higher data rate = higher quality (e.g., a rule-of-thumb formula: height × width × frames per second ÷ 48,000; the data rate should fall between half and double that result; see the sketch below)

– The data rate is also affected by the amount of action within the frame. The trick is to reach this ceiling limit with a lower rate so compression is more efficient.
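Here is a minimal sketch of that rule of thumb. The units and the 48,000 divisor are taken as given from the note above; treat the output as an approximate kilobytes-per-second target, not a precise specification.

```python
# Rule-of-thumb target data rate: height x width x frames-per-second / 48,000,
# with an acceptable range of roughly half to double that estimate.
def target_data_rate(width, height, fps):
    estimate = width * height * fps / 48_000
    return estimate / 2, estimate, estimate * 2

low, mid, high = target_data_rate(320, 240, 15)
print(f"320x240 @ 15 fps: ~{mid:.0f} KB/s (acceptable range {low:.0f}-{high:.0f} KB/s)")
```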

Contrast Sensitivity is handled through space reduction.

–Luminance (brightness)

–Color (chrominance)

Humans tend to notice differences in luminance more readily than they do chroma..

Humans tolerate more loss with color than with monochrome.

–The eye possesses a lower sensitivity to high and low spatial frequencies than to mid-frequencies.

Implication is that perfect fidelity of edge contours can be sacrificed to some extent (these are high spatial frequency brightness transitions).

Humans can detect differences in luminance intensity levels down to approx 1% of peak white w/in a scene. (= 100:1 contrast ratio)

–Therefore the math behind compression does not have to be linear.

–Also affected by viewing ratio. (how far away from the screen the viewer normally sits)

Delivery Mechanism

What is your viewing audience going to use to playback the video?

–CD-ROM?

–Internet/Intranet?

–Live?

Power/performance of playback machine

  • Lower end machines cannot handle higher data rates.
  • Factors are frame rate, data rate and frame size

Summary: Choosing a CODEC

General considerations

  • Method used for delivery
  • Audience’s configuration
  • Data Rate – should be maximized for the targeted delivery channel.

Also affected by amount of action within the frame. The trick is to reach this ceiling limit with a lower rate so compression is more efficient.

Also need to take into consideration power/performance of playback machine.. Lower end machines cannot handle higher data rates.

In summary: Factors are frame rate, data rate and frame size, #keyframes

PLUS:

Delivery mechanism – what is your viewing audience going to use to playback the video?

Performance Measurements:

Compression Ratio:

1.  Ratio between the original data and data after compression.

–A higher ratio is not always desired… it depends on the quality of the reconstructed data.

2.  Compression speed and complexity are also considerations, making asymmetric CODECs

– more desirable in some cases (for a live feed, encoding speed matters; for archived video, such as a QuickTime file, slower encoding does not interfere with viewing)

Summary of requirements for Playback/File-size

  • Data rate – needs to be 70 Kilobytes per second or less for lower end machines
  • Frame size – 320×240 or less recommended
  • Frame rate – 15 frames per second.. less for low-end/slow machines
  • Doubling – scaling up to (near) full screen requires a larger file size
  • number of Keyframes ( we will cover this later)
  • CPU Alternatives – allows you to produce several versions at different rates
  • Playback scalability – drops every other frame
  • Number of Transitions
  • Amount of action w/in a frame
  • z-axis (look this term up if you do not know it)

Methods can also vary by type of video

For training videos (which are usually not much more than two talking heads)

  • Usually compress very well at lower data rates because of the lower amount of action.
  • Can compress at higher data rates for CD-ROM and broadband

The Video Lesson Begins Here… we will now look at MPEG coding

The following are notes extracted from the video for your review

The theory behind Mpeg:

In order to understand the concepts discussed in this video, you need to understand this one underlying principle:

Compression yields even more compression due to various redundancy/predictive methods

Standards?

While the codec products we have been describing (Cinepak, Indeo, Sorenson) are widely available, none of them are any more than ‘de facto’ standards that were developed by private companies in hopes that they would become widely used.

Several international standards for compressed digital video, grouped together under the name MPEG, were developed by committees of experts (all of whom worked for these private companies and participated in order to 'lobby' for their specs to be included).

MPEG Choices

  • Motion -JPEG
  • MPEG-I (VHS to CD)
  • MPEG-2 (Digital Video) (DVD)
  • MPEG-3 (HDTV)
  • MPEG-4 (included more audio) (latest approved standard)
  • MPEG-5 (better compression)

We are up to MPEG-7 (interactivity)…

The only standards ever issued to date are 1, 2, & 4. (It takes seven years to formalize a standard… that is why even videos of the same MPEG version are not all created equal: developers take the guidelines that are issued and then play around and tweak them.)

MPEG uses prediction to exploit potential redundancy and approach the statistical entropy limit:

  1. Separate the pixel values into a limited number of data clusters (e.g., pixels whose color is clustered near sky blue or grass green or flesh tone or the color of clothing in the image, etc.).
  2. Send the average color of each cluster and an identifying number for each cluster as side information.
  3. Transmit, for each pixel: (this is where prediction comes into play)

the number of the average cluster color that it is closest to, and its difference from that average cluster color.

This method will yield approx 2:1 lossless reduction

Simple Differential Predictive Coding

Two values are sent:

  • Predicted pixel value (the value of the preceding pixel).. prediction assumes nothing is changed unless otherwise indicated.
  • Prediction error (difference between the predicted pixel and the actual pixel)

Because these are sent as entropy codes, there is a reduction… recall, these codes do not have the 4:1 requirement, as they are sent as side information. Side information is not a graphical format but, rather, information such as pointers or algorithms that aid in the reconstruction of the image during decompression… therefore cutting down on bandwidth (i.e., file size) requirements.
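Here is a minimal sketch of simple differential predictive coding on a single scan line, under the assumption above that the predicted value of each pixel is simply the preceding pixel. Notice how runs of unchanged pixels become runs of zero-valued errors, exactly the kind of lopsided distribution an entropy coder can shrink. The pixel values are illustrative only.

```python
import numpy as np

# One scan line of pixel values (illustrative numbers only)
line = np.array([120, 120, 120, 121, 121, 200, 200, 199])

# Encoder: send the prediction error for each pixel (prediction = previous pixel)
errors = np.diff(line, prepend=line[0])
print("prediction errors:", errors)        # [ 0  0  0  1  0 79  0 -1]

# Decoder: start from the first pixel value and accumulate the errors
reconstructed = line[0] + np.cumsum(errors)
print("lossless round trip:", np.array_equal(reconstructed, line))   # True
```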

Uses Frame Differential Coding

  • Prediction from a previous video frame
  • Requires storage of a frame of video in the encoder for comparison
  • Good for still images

Motion compensated prediction

Notice how often the word 'prediction' is used here?

  • Compares the present pixel to the location of the same object in the previous frame
  • Estimates the motion to make the prediction
  • Sends the motion vector and prediction error as side information.

MPEG Compression Technology

Three types of frames: I, P, and B

– I-frames are intra-frame encoded

– P-frames use forward prediction from I or P frames

– B-frames can use forward and backward prediction from I or P frames

Inter-frame Techniques

Simplest

– treats each image independently

Differences

- View each frame as an adaptation of previous frame – store changes in color at each pixel – the number of changes can be large even if the changes themselves aren't large.

Motion Compensation.

- Indicate motion of camera – store error pixels – This technique won't compensate for characters moving within a scene.

Block Motion Compensation.

- Break scene into blocks – indicate motion of each block from last scene

Techniques used when a scene changes

- Analyze newly uncovered information due to object motion across a background, or at the edges of a panned scene.

To handle this, MPEG uses I-frames as start-up frames (sent twice per second)

Sends B-frame to reduce the data required to send uncovered information

The frame order is changed to accomplish this:

Source order and encoder input order:

I(1) B(2) B(3) P(4) B(5) B(6) P(7) B(8) B(9) P(10) B(11) B(12) I(13)

Encoding order and order in the coded bit stream:

I(1) P(4) B(2) B(3) P(7) B(5) B(6) P(10) B(8) B(9) I(13) B(11) B(12)

Decoder output order and display order (same as input):

I(1) B(2) B(3) P(4) B(5) B(6) P(7) B(8) B(9) P(10) B(11) B(12) I(13)
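A small sketch of how that reordering comes about: the encoder simply holds each B-frame back until the future anchor frame (I or P) it depends on has been sent. The frame labels below are the ones from the sequence above; this is an illustration of the idea, not a real encoder.

```python
display_order = ["I1", "B2", "B3", "P4", "B5", "B6", "P7",
                 "B8", "B9", "P10", "B11", "B12", "I13"]

coded_order, pending_b = [], []
for frame in display_order:
    if frame.startswith("B"):
        pending_b.append(frame)           # B-frames wait for their future anchor
    else:
        coded_order.append(frame)         # send the anchor (I or P) first...
        coded_order.extend(pending_b)     # ...then the B-frames that needed it
        pending_b = []

print(" ".join(coded_order))
# I1 P4 B2 B3 P7 B5 B6 P10 B8 B9 I13 B11 B12  (matches the coded-stream order above)
```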

Intra-frame Techniques

The math is called:

Discrete Cosine Transform (DCT coefficients)

- Converts picture into frequency space

- Can judge which information is important. The transform is run to organize the redundancy in the spatial directions, and the result is then Huffman coded (i.e., duplicates are removed). This often results in lossy compression because some of the calculations are estimated.

Block motion vectors are Huffman encoded (another math coefficient)

B-frames are most useful for improved signal

In other words, for between-frame compression, the MPEG CODEC looks for a close match to each block in a previous or future frame (there are backward prediction modes where later frames are sent first to allow interpolating between frames).

The DCT coefficients (of either the actual data, or the difference between this block and the close match) are quantized and sub-sampled, which means that you divide them by some value to drop bits off the bottom end. This is possible because a human's capacity to see things is limited.
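Here is a minimal sketch of that intra-frame step, assuming NumPy and SciPy are available: transform an 8×8 block into frequency space with the DCT, quantize the coefficients by dividing them by a step size (this is where bits, and information, are dropped), then reverse the process. The step size of 16 is arbitrary; a real CODEC uses a whole table of step sizes tuned to human vision.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)      # one 8x8 luminance block

coeffs = dctn(block - 128, norm="ortho")                 # convert to frequency space
step = 16                                                # illustrative quantization step
quantized = np.round(coeffs / step)                      # lossy: fine detail is dropped here
restored = idctn(quantized * step, norm="ortho") + 128   # what the decoder reconstructs

print("max pixel error after round trip:", np.abs(block - restored).max())
```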

MPEG advantages

  • Has become an International standard
  • Allows Inter-frame comparisons
  • Predicts redundant traits
  • More universal

Benefits of MPEG (the last 'standard' version is MPEG-4, i.e., .mp4)

  • MPEG-I could achieve 30 fps on 320 X 240 windows when played back on boards that cost less than $500.
  • The key question was whether developers would move to MPEG I or stick with existing software. Most people agree that MPEG-I playback looked better and it added the advantage of compressing audio.

In summary:

  • JPEG and MPEG are not products… only standardization techniques… JPEG is symmetrical...
  • QuickTime is a product that incorporates standardization techniques into it.
Review!

While there is really no assignment attached to this lesson, I have a series of review questions you can utilize to test your knowledge of the content in this lesson.

EME 6209 – Introduction to Time Based Media

Thursday, June 1st, 2017

Where do we go from here?
There is no assignment at this time. After you complete these readings and explore time-based media, review the lesson on stop motion and do the activities associated with it.

To many, time-based media is simply applying a time line to still imagery. This could not be further from the truth.  While there is some truth to the fact that a time line makes the images move forward (and is actually the original intent of what used to be referred to as “motion pictures”), a whole new sub-culture of psychological and cognitive research has arisen to study the impact ‘moving images’ has on communicating and learning.

My first comment to you is that if a picture is worth a thousand words, then the value of moving pictures is exponentially more.

Moving pictures have a rich history of inventions and mechanized devices created simply to present viewers with pictures that move.

Flip Books

Wikipedia has a pretty good introduction to the flip book, one of the first non-mechanical attempts to bring moving images to life: http://en.wikipedia.org/wiki/Flip_book. These were among the first ideas folks had when dreaming up the idea of cartoons.

  • Here is an example of a flip book: a fun and exemplary video:

We have assembled some interesting sites for you to visit to explore this interesting world of the history of film and video:

Kinetoscope

One of the earliest mechanized ‘projectors’ was the kinetoscope. Again, Wikipedia comes to the rescue to introduce us:

http://en.wikipedia.org/wiki/Kinetoscope

Optical Toys

Here is a site specifically created to introduce you to some of the more interesting ‘toys’ that have been created over the years:

The toys found on the link below are the genesis of motion pictures. They are, in effect, single-frame animations like a modern movie cartoon. A series of still images, each showing a slightly different phase of a movement (or two images to be combined), is presented to us in rapid succession with some kind of “shutter” effect between the images. The “shutter” can be a slot in a drum, a mirror surface, or images on different pages or sides of the moving object.

Jack and Beverly’s Optical Toys:    http://brightbytes.com/collection/toys.html

Devices of Wonder Website:

http://www.getty.edu/art/exhibitions/devices/choice.html

Zoetrope/Kinestasis

Wikipedia:http://en.wikipedia.org/wiki/Zoetrope

Ok, ok, we now go back in time to the zoetrope. I do this because the zoetrope is the precursor to the concept of ‘kinestasis‘, which uses the psychological factor of persistence of vision to produce an illusion of action from a rapid succession of static pictures. Kinestasis evolved into a whole new sub-culture of film making, first made famous by Chuck Braverman, who introduced America to the concept with his famous “American Time Capsule”, presenting the history of the United States (up to that time) in 4 minutes, which played on the Smothers Brothers Comedy Hour back in the late 1960s. Played to a single drum beat, this work creates an intriguing view of America that epitomizes the anti-war sentiment during a tumultuous time in our recent history:


Braverman’s work inspired many others to follow. In particular, Jeff Schur made several videos using the kinestasis effect. Note that the pictures do not even have to be related to create the sense of movement. This video is merely a collection of miscellaneous images that he collected throughout his life and imposed in the background to create a wonderful story about his life. The video is called, interestingly enough, “Milk of Amnesia”:


Please pay attention to this information on kinestasis filming. Your first video project will be to create one of your own using photos/clip art you create in Photoshop.

Not to discourage you or anything but I wanted to show you an amazing video created by one of my former students. This was his first video project ever. I had asked my students to produce a video of current events. The video needed to contain at least 30 seconds of kinestasis. This student took it to heart and did a powerful production about the 9/11 attack. Note that every one of the newspaper headlines was from a totally different newspaper!


Digging Deeper

  • Time Based Media Resource Site
  • Frames per second: How fast/slow does the framing have to proceed in order to appear like the still images are ‘moving’? (hint: anything over 15 frames per second seems to work.. but 24 (for color slides) is the stated minimum)

Animation Fun

Animation has evolved into much more than simple flip books… it can even be interactive. Here is a short video found on YouTube that demonstrates an animated music score:

Segue: Importance of Storyboards

So where is all of this leading? You guessed it… storyboards! Ok, ok, this is a giant leap. But seriously, one of the things you need to be aware of is that you cannot create a good video or animation without designing it first. Many students tend to treat this part of the assignment as something they do not like to do and complete the storyboard AFTER the video is finished. That is kind of like doing your outline after you’ve written your term paper… or, as they say in the military… READY-SHOOT-AIM.

In other words, you can do a great project by either getting lucky or having a plan. Frankly, I prefer the latter. So we will go into this part of the process later on during the term. Your job, then, is to look at all of this with the hope that it will help you with that lesson on storyboarding. The great thing about all of this is that we can actually utilize what we learned in Photoshop to help us create the storyboards. In that lesson we will cover the use of still imagery as our basic slide show. You do not have to be an artist to create the visuals. But the process of organizing your thoughts is critical to creating a great video… onward!

EME 6209 – Introduction to Digital Imagery

Wednesday, May 31st, 2017

Introduction to Digital Graphics

Digital Graphics are an important component for multimedia projects. You would probably agree that a “picture is worth a thousand words” and most of us find that reading a lot of text online is pretty boring!

Before we get into the actual graphics projects, let’s take a quick look at some of the theory behind the digitization, storage, and manipulation of digital images.

Here is an introduction to the process of digitization. It is a shortened version of a lecture I used to deliver to digital media students in a series of 2hr class sessions:

Part One

Next, is a similar lecture on storing/processing images:

Part Two

The last lecture covers manipulating images and the differences between raster and vector graphics:

Part Three


Alternative Imaging Software – Paint

Paint is a good open-source program for PCs. Its functionality has evolved nicely over recent years.

Paint is also a standalone freebie product offered on all Windows PCs.

Remember, at the end of the term you are going to be making a final project using ONE of the multimedia programs we cover. Keep that in mind when you select the graphics program to download. If you think you are going to submit a graphics project, then any 30 day trials might expire prior to your submitting your project.

Digging deeper – Making Overlays/Transparencies

The intent of doing an overlay (where you insert one image over the top of another, as asked for in the tee shirts/bulletin boards activity) is to make certain portions of the top image(s) transparent so the bottom image(s) show through. This is not as difficult as it once was. One way, of course, is to make the top image smaller than the bottom one. You may also get lucky and find an image that has already been altered with a transparency. But chances are you will have to create your own.

There are various ways to make this happen:

  1. With most office products, you can insert one image on a slide over the top of another, easily make any white space on the top image transparent, then save/export the slide to a .jpg file.
  2. Clipping Magic is a free online tool to remove a background from an image.
  3. Luna Pic is another online service.
  4. Perhaps my favorite is Photoscissors by Wondershare ($19.99). It is inexpensive, and you control it because it resides on your computer.

EME 6209 – Audio Editing

Wednesday, May 17th, 2017

After Completing this Lesson You are Expected to Do the two Activities

Introduction

Here is a short, lighthearted overview of the fundamentals of sound



Sound is Caused from Vibration

Sound moves in a spherical, three-dimensional pattern away from its source at 1,130 feet per second in the form of a wave. It has peaks (highs) and troughs (lows) that build on or cancel one another to create complex wave forms… (some of which) are audible to us.

Recall the Lesson on the Electromagnetic Spectrum

Recall the lesson on the electromagnetic spectrum. Sound sits at the lowest level of the spectrum, in the portion we have the capability of hearing (recall that some sounds cannot be detected by humans, such as the sound produced by a ‘silent’ dog whistle).


Audio (or is it ‘sound’?) is an important component of any multimedia project. Audio can enhance or detract from your students’ enjoyment/understanding of the product. As an experiment, try watching a TV show or a movie for a while without audio. You will notice things visually that you may have missed previously. On the other hand, you may not understand what is going on… especially if you cannot read lips.

In this course, we want to consider ways that audio is useful and apply it appropriately. That is not always easy and you need to have some basic audio editing skills to accomplish this. Most often audio is left to the last moment in a project or forgotten altogether. In honor of this normal oversight, we are starting with audio. A good video is made better with proper voice overs that are ‘normalized’ (i.e., presented at the correct and appropriate amplitude/volume).

There are two kinds of audio in a video: diegetic and non-diegetic. These are very fancy words. The former simply refers to those sounds in nature (i.e., ambient sounds); the latter to those that are artificially added afterward in post production (voice overs, special effects, etc.). These can be recorded while shooting a video, but most often non-diegetic sounds are added later. In some cases, you might wish to remove the ambient/diegetic sounds altogether. Then there is the issue of balancing the two… sometimes you need the music to die down as you begin your voice over. You must pay attention to this during production because once your project is rendered (i.e., finalized) it is almost impossible to fix this interaction between the two types of audio in your final project. When we get to our video lesson we will show you some tricks on how to insert both ambient and non-diegetic sounds and balance the two. First, we need to work with audio on its own to become familiar with how to edit it correctly. We start with microphones…

What are Transducers?

Transducers are used in microphones and speakers to convert one type of energy into another. Microphones and speakers are electro-acoustic transducers that convert sound into an electrical signal, and an electrical signal back into sound. In other words, speakers are just microphones in reverse.

The three most common types of transducers used in audio recording are:

1. Dynamic

2. Condenser


3. Ribbon


Microphone graphics courtesy of http://churmura.com/

Theory of Digital Sound

*Source: http://hammersound.net

Ok we have done our bit to cover analog sound/audio and microphones. Now lets take a look at digitized sound. Before we do, we need to talk a bit about the elements that make up sound and our abilities to actually hear it.

Scientists assume that the human ear is able to discern frequencies between 20Hz-20000Hz, since those numbers make their calculations a lot easier.

Here are a few examples of different frequencies:

60 Hz       very low
440 Hz      A'
4,000 Hz    audible
13,000 Hz   ouch!
20,000 Hz   too high

Another very important property of sound is its level; most people call it volume. It is measured in dB (decibels, named after Alexander Graham Bell, the inventor of the telephone).

So why don’t we measure loudness in bels instead of decibels? Mainly because your ear really can discern an incredible number of different loudness levels (on the order of 1,200,000,000,000, that’s 11 zeroes), so they had to think of a trick to describe an incredible range with only a few numbers. They agreed to use tenths of bels (decibels, dB) instead of bels.

Most professional audio equipment uses a VU meter (=Volume Unit meter) which shows you the input or output level of your equipment. This is very convenient, but only if you know how to use it: A general rule is to set up the input and output levels of your equipment so that the loudest part of the piece you want to record/play approaches the 0dB lights. It is important to stay on the lower side of 0dB, because if you don’t, your sound will be distorted badly and there’s no way to restore that. If you’re recording to (analog!) tape, instead of (digital) hard disk, you can increase the levels a bit, there is enough so-called ‘headroom’ (=ability to amplify a little more without distortion) to push the VU-meters to +6dB. There is some more information on calibrating equipment levels in the recording section below.

Some examples of different levels, if you’d like to play with them for a while:

 0.0 dB  = 100%     maximum level
-6.0 dB  = 50.0%    half power
-18.0 dB = 12.5%    very quiet
+6.0 dB  = 200%     a little too loud (a lot of distortion)

When digitally recording your audio there are two main settings that define the quality of the audio waveform:

  • Bit Depth is the number of discrete voltage levels used to describe the amplitude of each sample. It is directly related to the amount of dynamic range.
  • The Sample Rate is the number of ‘snapshots’/samples of the audio taken over a given period of time. Higher rates equate to smoothness in the audio, which in turn affects the range of lows to highs that we hear.
  • Standard Red Book CD settings are a 16-bit depth at a sample rate of 44.1 kHz. DVDs are usually 24-bit at sample rates of 48 kHz to 192 kHz (a quick size calculation follows below).
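Those settings translate directly into uncompressed file size, which is worth working out once. Here is a quick sketch using the Red Book CD numbers above (stereo assumed):

```python
# Uncompressed audio size = sample rate x bytes per sample x channels x seconds
sample_rate = 44_100          # samples per second
bit_depth = 16                # bits per sample
channels = 2                  # stereo

bytes_per_minute = sample_rate * (bit_depth // 8) * channels * 60
print(f"{bytes_per_minute / 1e6:.1f} MB per minute of uncompressed CD-quality audio")
# ~10.6 MB per minute, which is why WAV files run around 10 MB per minute
```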

Why is this important? When you do your audio projects you will have the opportunity to set the levels in order to make the sound play well based on your computer quality. This is also the way one can control ‘normalization’ .. a means to make sure all audio plays back at a consistent level among different parts of a movie or different movies.. ever notice when you watch a TV show and all of a sudden the commercial BLASTS at you? While this is intentional, you could normalize everything so you would not have to constantly adjust the volume on your set as you watch things.

Sampling

The sample rate of a piece of digital audio is defined as ‘the number of samples recorded per second’. Sample rates are measured in units of frequency called Hertz (Hz; or kHz, kiloHertz, a thousand samples per second), named after Heinrich Rudolf Hertz, the first person to provide conclusive proof of the existence of electromagnetic waves. The most common sample rates used in multimedia applications are:

8,000 Hz    really yucky
11,025 Hz   not much better
22,050 Hz   only use it if you have to
32,000 Hz   only a couple of old samplers
44,100 Hz   Perfect!!
48,000 Hz   some audio cards, DAT recorders

How Much Sampling is Enough?

The Nyquist Theorem states that the sampling frequency must be at least twice the highest frequency you want to capture. So if human hearing extends to roughly 20 kHz, you need a sampling rate of at least 40 kHz, which is why 44.1 kHz became the practical minimum. Higher sample rates are used to capture the ‘harmonics’ that lie above our nominal audible range but still color what we perceive.
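A short sketch of what goes wrong when the Nyquist rule is violated: a 30 kHz tone sampled at 44.1 kHz cannot be represented and instead “folds back” (aliases) into the audible band. The specific frequencies here are illustrative only.

```python
import numpy as np

fs = 44_100                              # sample rate (samples per second)
tone = 30_000                            # above fs/2, so it violates the Nyquist rule
n = np.arange(fs)                        # one second of sample indices
samples = np.sin(2 * np.pi * tone * n / fs)

spectrum = np.abs(np.fft.rfft(samples))
print("apparent frequency:", spectrum.argmax(), "Hz")   # ~14,100 Hz, not 30,000 Hz
```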

Dynamic range

The capacity of digital audio cards is measured in bits, e.g. 8-bit sound cards, 16-bit sound cards. The number of bits a sound card can manage tells you something about how accurately it can record sound: it tells you how many differences it can detect. Each extra bit on a sound card gives you another 6 dB of accurately represented sound (why? each extra bit doubles the number of amplitude levels, and doubling amplitude corresponds to roughly 6 dB). This means 8-bit sound cards have a dynamic range (the difference between the softest possible signal and the loudest possible signal) of 8×6 dB = 48 dB. Not a lot, since people can hear up to 120 dB. So, people invented 16-bit audio, which gives us 16×6 dB = 96 dB. That’s still not 120 dB, but as you know, CDs sound really good compared to tapes. Some freaks, that’s including myself ;-) want to be able to make full use of the ear’s potential by spending money on sound cards with 18-bit, 20-bit, or even 24-bit or 32-bit ADCs (Analog to Digital Converters, the gadgets that create the actual sample), which gives them dynamic ranges of 108 dB, 120 dB, or even 144 dB or 192 dB.
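The 6 dB-per-bit rule from the paragraph above, written out as arithmetic (these are theoretical maxima, as the next paragraph explains):

```python
# Each bit doubles the number of amplitude levels and adds roughly 6 dB of range.
for bits in (8, 16, 20, 24):
    levels = 2 ** bits
    print(f"{bits}-bit audio: {levels:,} levels, ~{bits * 6} dB theoretical dynamic range")
```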

Unfortunately, all of the dynamic ranges mentioned are strictly theoretical maximum levels. There’s absolutely no way in the world you’ll get 96 dB out of a standard 16-bit multimedia sound card. Most professional audio card manufacturers are quite proud of a dynamic range over 90 dB on a 16 bit audio card. This is partly because of the fact that it’s not that easy to put a lot of electronic components on a small area without a lot of different physical laws trying to get attention. Induction, conduction or even bad connections or (very likely) cheap components simply aren’t very friendly to the dynamic range and overall quality of a sound card.

Quantization Noise

Back in the old days, when the first digital pianos were put on the market (most of us weren’t even alive yet), nobody really wanted them. Why not? Such a cool and modern instrument, and you could even choose a different piano sound!

The problem with those things was that they weren’t as sophisticated as today’s digital music equipment, mainly because they didn’t feature as many bits (and so they weren’t even half as dynamic as the real thing), but also because they had a clearly audible rough edge at the end of the samples.

Imagine a piano sample like the one you see here. It slowly fades out until you hear nothing.
At least, that’s what you’d want… As you can see by looking at the two separate images, that’s not at all what you get. These images are both extreme close-ups of the same area of the original piano sample. The top image could be the soft end of a piano tone. The bottom image, however, looks more like Morse code than a piano sample! The sample has been converted to 8 bit, which leaves only 256 levels instead of the original 65,536. The result is devastating.

Imagine playing the digital piano in a very soft and subtle way. What would you get? Some futuristic composition for square waves! This froth is called quantization noise, because it is noise that is generated by (bad) quantization.

There is a way to prevent this from happening, though. While sampling the piano, the sound card can add a little noise to the signal (about 3-6 dB, that’s literally a bit of noise), which will help the signal become a little louder. That way, it might just be big enough to get a little more realistic variation instead of a square wave. The funny part is that you won’t hear the noise, because it’s so soft and it doesn’t change as much as the recorded signal, so your ears automatically forget it. This technique is called dithering. It is also used in some graphics programs, e.g., for resizing an image.
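Here is a minimal sketch of quantization noise and dithering, assuming NumPy: a very quiet tone rounded to 8 bits collapses to just a few output levels (the “square wave” effect described above), while adding roughly a bit’s worth of noise before rounding preserves some of the soft variation. The amplitudes and noise level are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44_100
t = np.arange(fs) / fs
quiet_tone = 0.01 * np.sin(2 * np.pi * 440 * t)      # a very soft piano-like tone

def quantize(signal, bits):
    levels = 2 ** (bits - 1)
    return np.round(signal * levels) / levels

plain = quantize(quiet_tone, 8)                       # collapses to a handful of steps
dither = rng.uniform(-0.5, 0.5, fs) / 2 ** 7          # ~1 bit of added noise
dithered = quantize(quiet_tone + dither, 8)           # soft detail survives, on average

print("distinct output levels without dither:", np.unique(plain).size)
print("distinct output levels with dither:   ", np.unique(dithered).size)
```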

Digging Deeper -The Digitization Process

For those of you who wish to dig a little deeper into the theory behind digital audio, you can watch this short videocast (runs approx 25 minutes). It is a shortened form of a 3 hr lecture that I used to introduce digital audio concepts to a digital media class.


Commonly Used Audio Codecs / File Types

  • mp3 is probably the most well-known audio format for consumer audio storage. While it is emerging as the ‘de-facto’ standard of digital audio compression for the transfer and playback of music on digital audio players, there still remain several licensing issues associated with this format.

Because of these licensing issues, many developers have come to issue their own proprietary formats. This explains why we often have to add a plug-in to the editors in order to be able to convert a .wav file to .mp3 (which is a licensed file type):

  • The WAV standard is the audio file container format used mainly on Windows PCs. It is commonly used for storing uncompressed (PCM), CD-quality sound files, which means that they can be large in size, around 10 MB per minute. WAV files can also contain data encoded with a variety of lossy codecs (i.e., there is some data/quality loss) to reduce the file size.
  • AIFF is the standard audio file format used by Apple. It could be considered the Apple equivalent of wav.
  • The FLAC file format (Free Lossless Audio Codec) is a lossless compression codec; theoretically speaking, it offers no data loss, which results in larger file sizes.

Audio Conversion

It is important to have some audio conversion software in your toolkit (this can include a video editor that is able to work with the sound alone) so you can tailor your audio file to your particular application. The most common file format for podcasts, etc., is .mp3. Many of the audio production programs create formats particular to their own platform (like .wav (PC) and .aiff (Mac) files). So, you will be faced with having to convert your files to another format.

If your video editor does not handle mp3 conversions, I have listed below some audio-only programs to look into, along with some information on how to use your basic media players to convert your audio. Here are two commonly used (and free) ones:

iTunes generally uses .aiff files. How to convert iTunes files

Podcasting

EME 6209 – Storyboarding and Edit Decision Lists

Friday, April 14th, 2017

After Completing this set of Readings You are Expected to Do the Following

To help you understand the importance of storyboarding in organizing a video/movie/television show, understand that most are not necessarily produced or shot in the order of their final presentation. For example, for scenes shot on location, the movie producer will organize all the scenes for that location to be shot at once, regardless of the final editing scheme. Even if the story line chronologically takes the viewer back and forth to a remote location, all those scenes would be shot at once regardless of their place in the final story. All interior scenes are then shot together, again regardless of their place in the final story. In fact, MOST movies/shows are shot out of order in this way.

A second use for storyboarding is to create video overlays or inserts over a video track that has been laid down on a timeline. This is useful when a video/movie is shot using more than one camera. In the old days many shows were shot using a two-camera arrangement (an ‘A’ camera and a ‘B’ camera). The ‘A’ camera carried the main shot. The second camera would be used to shoot at a different angle (such as an over-the-shoulder shot) or a different scene entirely while the audio continued along. In movie making these ‘inserts’ are called ‘B-roll footage’ (or simply B-roll). The most common usage is in a newscast, for example, when the anchor is reading the news about a topic (say a car accident). While he or she is talking, B-roll is inserted at strategic points in the story.

To do this activity you need to decide on your video editing software. You probably did that already when you created your audio file.

Edit Decision Lists (EDL)

Regardless of your platform, our activity is intended for you to understand the importance of a storyboard. In editing we use a special kind of storyboard that we call an Edit Decision List (EDL) to help keep things in order.

Your job is to download a folder that contains 13 tracks: a main video track, several insert video clips, and three audio inserts. Because of format/codec differences we have three folders… one for Windows, one for MAC desktops and one for iPads. Click on the appropriate link then right click on each file to download it to your computer.

In all three cases, the scenes are listed out of order/jumbled. There is one timeline clip (the a-roll footage… easy to find… it has the word “master” embedded in the title). Most of the rest of the clips are inserts/b-roll, where you utilize the overlay function to insert/overlay them over the top of the primary a-roll that you lay down first onto the primary video track (sometimes labelled ‘Video 1’). It carries all of the dialog. There are also three audio overlays (.mp3) that need to go in their appropriate spots on the timeline.

Alternatively, you could simply cut the timeline and insert the overlay video on the same timeline. If you do the editing in this way, you need to make sure the audio timeline is not cut (except for the spots where the .mp3 files go).

Your job is to assemble the scenes into a coherent final video, save it as a single mp4 or flv file and then upload it using the Easy Uploader Function. DO NOT SEND UP A PROJECT FILE.. YOUR FINAL UPLOAD SHOULD BE A FINALIZED VERSION USING EITHER A MP4 OR FLV FORMAT.

Select the usual choices for the course, term, sections, etc., and select storyboard as the assignment in that drop down. Name the file video_assembly (do NOT place your name in the file name). Post your confirmation on Canvas.

Please take your time with this. Your first task is to view the videos. This activity may take you up to four hours to complete so don’t get frustrated! The message here is that without a storyboard/EDL this can be a giant task. If a storyboard/EDL were provided it might only take about an hour or so to do.


What are Storyboards Used For?

Storyboards are helpful organizers for creating your time-based media. They can be completed after you write the script or as something you do as an outline before writing it. The storyboard shows what you want the scene to look like with more detail than the script. In it, you are drawing out visually an entire scene or part of one. It is helpful because you are showing what is going on and depicting camera angles and camera shots at the same time.

You can become very creative with your storyboards by drawing the scenes in sequence, giving various camera shots of characters and camera movements. Because the storyboard is a visual representation of the video, it will help you decide on what kinds of images and video clips you will be needing to add to the footage you actually shoot.

In some cases you will be drawing your own images to place on the storyboard. But you are also allowed (and encouraged) to find still images that help you visualize the scene, especially if you are not that great of an artist.

Storyboards serve a very important function in movie making, in that storyboards are a whole lot cheaper than bringing the entire crew to the set and having them wait around for the director to make creative decisions from the script. Because the storyboard is part of the planning process, there will be several instances where the actual video ends up being different. These differences are the result of decisions made at the time of shooting and may be due to complications and other issues that come up when actually trying to create the scene.

The videos below illustrate the point. On the right is the storyboard from the book trailer Alas Babylon. On the left is the actual trailer. While the resulting video generally follows the story line, issues arose during the shoot. For example, in the opening scenes the plan was to have a street scene with people going about their business as usual. Because the team was having difficulty getting enough people to volunteer for the scene, it was re-shot as an empty street scene with an empty swing moving in the breeze. While this scene came out slightly different, it actually was more effective in the way it was shot, creating a feeling of emptiness and desperation. See if you can find other differences.

The best way to view the videos below is side by side. First click on the trailer on the left, then immediately click on the storyboard video on the right. The timing will be a little off, but it is an effective demonstration.

Alas Babylon Trailer Alas Babylon Storyboard

Animatics

Sometimes the storyboard can be created in the form of an animatic. An animatic is an ‘animated’ storyboard in which the images actually move. While these movements are rather crude, they better represent the camera shots and movements within the scene. Animatics are often utilized during the planning stages of animated films because they are much cheaper to produce.

Here is a cute video from YouTube that shows an animated story of line drawings. It has no dialog, but the music fits. As you can see, if this organizes the final product, then inserting the finalized images into the storyline in place of the line drawings makes your final video pretty powerful:

Below is another great example of an animatic. This one was created by one of our Digital Media students, Taylor Gorman, who was using the animation to add robustness to the content of his Me-Story video project. He did such a wonderful job creating the animatic, it actually can stand on its own as a ‘feature’. On the left is the original animatic. Once you have played it, click on the right video to see how he incorporated it into his final video.

Gorman Animatic Gorman Me-Story

Storyboard Pitch

The most common use for storyboards in animated films is the storyboard ‘pitch’. This is a session at which movie production teams present, or ‘pitch’, their concept for a particular scene or event to be included in the final film. Most pitches are not included, for various reasons, even though the concept may be very well done. An example of such a real-life ‘pitch’ is shown at the left below. It is for a scene for the movie Shrek that never made the film.

The video on the right is a series of storyboard pitches from students at a local middle school that we were working with on the video My Name Was Keoko. After being shown the Shrek pitch, these students were assigned to present their team’s project idea in front of the class, mimicking the Shrek concept. It is at these sessions that you, as the teacher, can critique the students’ conceptualization of the book, and test whether they have actually read it and made proper decisions. It is also your opportunity to determine whether the students will be able to actually produce what they imagine or have ‘over-engineered’ their concept relative to their technical competence and what can be done in the time frame you have allotted for the process. Very often you can make suggestions as to how to accomplish what they intend visually on the video long before they have invested too much time and energy in a project for which they lack expertise. You can also cover once again the concepts of original content, copyrights, etc.

Keoko Storyboard Pitch Shrek Storyboard Pitch
Digging Deeper

Storyboard and the Director

Sometimes an entire movie (including its script) can be driven by a visualization of the plot. Decisions can be made to the point where the intent of the story is modified (including the script) once the director gets a visual picture of how the movie is to be shot. The storyboard becomes the director/screenwriter of sorts. Each shot angle implies a certain meaning or mood, and once the director gets an idea as to which of these works best, he will sometimes decide to actually alter the movie. One director who utilizes the storyboard in this way is M. Night Shyamalan in The Sixth Sense:

Click below for additional information on using storyboards

Storyboarding Job Aid
