Jack Neill Baker's works

Audio, Video, and Image Production. Portfolio pieces blended with tools and techniques.

...The focus then shifted to new methods of streamlining the captioning process with new software. The guest speaker specifically mentioned Presie, Otranscribe, Lemony and Amara.

The speaker argued that this information can be used by creators for more audience exposure with little time commitment and big payoffs for internet, broadcast, and network content. In my case, web-hosted videos are of importance and are also quickly changing in the ways they address accessibility. YouTube now analyzes the audio in uploads to provide automatic captioning and translation into multiple languages. I wasn't aware that captions are included in the metadata used for search results. This means using captions can expand my potential audience during search queries here in the US as well as overseas, once the metadata of translated captions is included. Ideally this ensures that a video's quality and its relevance to the viewer determine search rankings by removing language barriers from the equation.
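Part of what makes captions searchable is that caption files are just timed plain text. A minimal example in WebVTT, one common caption format accepted by YouTube (the timestamps and cue text here are made up for illustration):

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to the tutorial.

00:00:04.500 --> 00:00:07.000
First, open the project settings.
```

Every word in those cues is text an indexer can read, which is presumably how caption content ends up feeding search metadata.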

Another way YouTube is addressing accessibility is by giving viewers the option of adjusting a video's playback rate. This can sometimes result in lower-quality playback, but I've been learning some ways an upload can work in harmony with the speed tool. If a viewer slows down footage that was shot at a progressive frame rate (ex. 30p), playback will be smoother and contain fewer artifacts than footage shot in an interlaced format (ex. 60i). Interlaced footage captures two half-frames (fields) nearly simultaneously. On traditional CRT monitors and televisions the two fields flicker individually in the same amount of time, which is why they play back well there; on an LCD screen the fields are literally stitched, or "weaved," into a single frame, cutting the total number of frames in half (60i becomes the equivalent of 30 full frames).

Usually there isn't a problem using interlaced frame rates (network news is usually 60i), but when YouTube viewers slow down playback, the seams where the two fields were stitched together are revealed. Some people call this "ghosting," and it's more evident with fast-moving subjects, or in this case normal subjects that are slowed down. In recent years YouTube's upload engine began deinterlacing frames automatically to prevent the ghosting artifacts, but there is a loss in quality which can vary depending on the format the video was rendered in before upload (ex. .avi, .wmv, DV). To address this, I now try to isolate what content is more likely to be slowed down so that I can shoot at a frame rate that will withstand slow speeds.
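The weaving and ghosting described above can be sketched with toy data. This is only an illustration, not any real player's or codec's code: frames are tiny lists of pixel rows, and the "subject" is a bright bar that moves one pixel between the two field captures.

```python
# Sketch of how an LCD "weaves" two interlaced fields into one frame.
# Toy data: each field is a list of rows, each row a list of brightness values.

def weave(top_field, bottom_field):
    """Interleave scanlines: even rows from the top field, odd from the bottom."""
    frame = []
    for top_row, bottom_row in zip(top_field, bottom_field):
        frame.append(top_row)      # scanline captured first
        frame.append(bottom_row)   # scanline captured a moment later
    return frame

# A bright bar (9) that moved one pixel right between the two field captures:
top    = [[9, 0, 0, 0], [9, 0, 0, 0]]   # bar at column 0
bottom = [[0, 9, 0, 0], [0, 9, 0, 0]]   # bar at column 1 (subject moved)

frame = weave(top, bottom)
for row in frame:
    print(row)
# The bar zig-zags between columns 0 and 1 in the woven frame: that
# sawtooth seam is the "ghosting" that slowed-down playback exposes.
```

At normal speed the eye averages the seam away; slow the playback down and each woven frame lingers long enough for the zig-zag to be visible.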
With fast lectures and tutorials, viewers can find themselves watching the video repeatedly to catch missed information. If I had to shoot a lecture or tutorial, I'd use the quicker 60p rate (p for progressive) so that viewers who slow down playback to catch everything still get smooth footage instead of watching the video over and over. Now I try to anticipate what kind of content is more likely to be slowed down; an example might be students taking notes on their professor's fast-paced video lecture, or a viewer trying to follow the steps in a concise video tutorial. Before, I've always liked using 24p and 30p frame rates to give a more cinematic feel to narrative-based videos, but for informational and advertisement content the cinematic appeal of 24p isn't a priority for viewers. I also just learned about the analytics provided by YouTube, which reveal the different places around the world where individual videos are being watched. Soon I hope to tailor the audio in my videos for YouTube's automatic captioning system so that the subtitles (and resulting keyword metadata) contain fewer mistakes. I imagine the translated captions quickly become meaningless if the native captions don't reflect the content because my audio format or quality isn't suited to a particular web host's caption creator.
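The case for 60p comes down to simple arithmetic, assuming the player keeps every frame when slowing down. A quick sketch (the function name is mine, just for illustration):

```python
# How many frames per second a viewer effectively sees at reduced
# playback speed, assuming every shot frame is retained and stretched.

def effective_fps(shoot_fps, playback_rate):
    """Frames shown per second of real time at the given playback rate."""
    return shoot_fps * playback_rate

for shoot_fps in (24, 30, 60):
    for rate in (0.5, 0.25):
        print(f"{shoot_fps}p at {rate}x -> {effective_fps(shoot_fps, rate):g} fps shown")
```

At half speed, 60p footage still delivers 30 fps, while 24p drops to a choppy 12 fps, which is why the faster progressive rate holds up better for lecture and tutorial content that viewers are likely to slow down.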