Archive for the ‘ds106’ Category

 

Digicamp Radio Piece

Saturday, July 7th, 2012

My initial idea for a DS106radio segment was to do a cooking segment.  Yes, I’ve watched too much Food Network and especially too much Good Eats.  I have a picture of me and Alton Brown hanging in my kitchen, and I wanted to do a great grilled summer dinner and document it for audio.  In the end, though, it was a rainy week in SoFL (but not as rainy as VA!) and there was little opportunity to grill.

At the same time I was going to work each day at DigiCamp in Boca Raton.  It was my second week of the summer – Advanced Video Production.  Basically, it’s 6 hours a day where kids work with each other on projects of their own choosing, with whatever camera and computer equipment they need to create video.  There are three technology-oriented teachers (I’m one of them) there to help them with technical issues and to push their creativity, along with a bunch of college and high school students who all took Mark Stansell’s Video Production classes in middle school and are experts in things like iMovie and Final Cut Pro.  Together we try to enable the kids to be creative and give them the tools to take what is in their imaginations and put it into video.  I was having so much fun as part of the camp, and it seemed to fit into the Summer Fun theme, so I threw together a piece on DigiCamp.

I wanted it to sound like something you would hear on NPR, and I think the biggest thing I was lacking was conversations with the kids and counselors at camp.  I captured some of my interactions with students, and some of their interactions with each other.  Still, there was too much of me narrating; I could have interviewed more participants and let them tell more of the story.

Technically, I used iMovie on my iPad to record everything, even though I didn’t need video. I figured having the video attached to the audio would make the reviewing and editing process better, and I think I was right.  Putting together what I had, and adding my narration by just recording myself with the front iPad camera and mic worked out well.  I exported the project to my MacBook, and then just extracted the audio.  The video part was pretty much unwatchable!
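For the curious, the extract-the-audio step can also be done from the command line. This is a hypothetical alternative rather than what I did: it assumes ffmpeg is installed, and the file names are made up.

```python
# Hypothetical sketch: building an ffmpeg command that pulls the audio
# track out of an exported movie without re-encoding it. QuickTime or
# iMovie's own export works just as well; this is only an illustration.

def extract_audio_cmd(video_path, audio_path):
    """Return the ffmpeg argv that copies the audio stream as-is."""
    return [
        "ffmpeg",
        "-i", video_path,    # input movie
        "-vn",               # drop the video stream
        "-acodec", "copy",   # copy the audio without re-encoding
        audio_path,
    ]

print(extract_audio_cmd("digicamp.mov", "digicamp.m4a"))
```

Because the audio is copied rather than re-encoded, nothing is lost in the extraction.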

 

 

 

A little metal for #457

Saturday, July 7th, 2012

Not only is this a bastardization of video assignment 457, but it’s also kind of a cheat since I did this video a handful of years ago.  But I’m not a student, and I’m not working for a grade, so I get to do what I want to do.

The guidelines for video assignment #457 read like this:

Take a favorite cartoon or anime like show, take some clips, mash them together and add music to it. Try to pick out a specific theme in your clips that follow the theme of the music. Keep an eye out for changes in the music and plan your clips accordingly.

Here’s my interpretation:

Take a favorite cartoon or anime like show, take some clips, mash them together and add music to it. Try to pick out a specific theme in your clips that follow the theme of the music. Keep an eye out for changes in the music and plan your clips accordingly.

I met some of the criteria.

Anyways…  Battlemaster is a Richmond black metal band.  They are wonderful.  Some of the sweetest guys you’ll ever meet.  Back in 2007 they put out Warthirsting & Winterbound, which included the song, “This Mead Is Making Me Warlike.”  When played live, this song really rallied the audience.  It was met with singing, the raising of PBR cans in the air, and sometimes the tossing of said PBR cans.

I took the song and mixed it with public domain footage from Archive.org.  I haven’t looked at this video in years.  The assortment of clips is truly, truly bizarre.  So here we go:

Thanks, DS106, for making me pull this project off the shelf.

AI and Design Theory: Before and After

Saturday, July 7th, 2012

Design Assignment 366 (3 of 5 stars): Design a film-based cover using the aesthetic framed by Spacesick.

I have admired Spacesick’s ingenious “I Can Read Movies” book covers and the amazing interpretations by DS106ers like Chad and Scottlo. Well, I like a challenge so I set my sights high with this assignment.

I know to admire good design but have never studied how to create it, so I decided to try a bit of an experiment: design my book cover “green,” with no formal study; then go on the Design Safari to study design elements; and then return to my original design to see if I would change anything.

I certainly had a good time on my Design Safari, or “Search and Rescue” as I came to refer to it.  I know what the elements are and can conveniently refer back to them with my design trading cards, but I can see that it’s going to take a lot more practice to understand how to apply them successfully.  I was intrigued, when reading Roger Ebert’s brilliant “How to Read a Movie,” by his hypothesis that directors don’t intentionally consider visual strategies when composing their shots:

I have never heard of a director or cinematographer who ever consciously applied them. I suspect that filmmakers compose shots from images that well up emotionally, instinctively or strategically, just as a good pianist never thinks about the notes. It may be that intrinsic weighting is sort of hard-wired.

I’d like to think that the “hard-wired” intrinsic weighting can be achieved after much thoughtful study and practice until a degree of automaticity is reached. That’s the hopeful, eternal optimist perspective.

Inspiration
Spielberg and Kubrick’s AI is a movie that continues to play in my head, always conjured up when I’m amazed by some new technology and how the “axemaker’s gift” may be changing what it fundamentally means to be human. I liked the idea of trying to design a novel cover that would play on the human/artificial human tension.

Process and Reflections

I measured one of Spacesick’s originals to get my dimensions right but stopped short of copying her template. She’s opted for “All rights reserved,” and it seemed questionable to copy anything but her idea. I could be overly cautious, but that’s the side I’d rather err on. I think if she had wanted to encourage copying, she would have chosen a Creative Commons license.

Using GIMP, I chose a digital background, simply typed in a giant A and I, and added a small image of the blue fairy and the android-boy that I colorized to blend in. So you see my simple, original cover on the left.

After my Design Safari, I tried hard to come up with some clever way to improve the design. No go, though I do now understand some of the elements I used unintentionally. Those include the alignment of the A and I so it almost seems house-like, representing the home that the android-boy wanted so badly (metaphor, symbol), as well as the digital background to reflect the AI component. No breakthrough a-ha’s. I’m a bit better at GIMP now, so I was able to resize the fairy-boy image to fit in the A, and I attempted to distress the digital background a little with the smudge tool.

[Images: AI cover with blue fairy and boy; AI letters with blue fairy and boy icons]

I’m going to continue to try and apply the design elements and I’m especially interested in gestalt theory and how that might affect the design process. If you have resources you’d recommend, I’m all eyes.

Storytelling as Learning Tool

Friday, July 6th, 2012

On Monday I’ll be giving a brief talk to the Langley cohort of Simon Fraser University’s Learning & Teaching with Technology Field Program about Personal Narratives as a framework for learning. I’m not particularly adept at the nomenclature surrounding and separating ‘frameworks,’ lenses, and methods, and I mostly consider myself self-taught when it comes to this stuff, but I have long found stories to be a vital part of my teaching bag-of-tricks, and I will be sharing some of what I’ve found along the way with the group. As an introduction, I’ve shared the following post on the class’s Posterous account, but it’s private; so I’ve shared it here in the hope that those of you out there – who have really done all the teaching in this supposed ‘self-teaching’ I’ve been doing – might leave us a comment, a story, a link to some reading, or pass this post along to someone who might.

Why Sharing Our Stories Matters: Story by Bryan Jackson from unplugd on Vimeo.

As a means of collecting some of the supplemental material I would attach to a discussion of Personal Narratives and Storytelling in the classroom, I thought I would put together a post here that you may find useful in extending the conversation post-“Institute.”

As a general introduction, the above video is a story I told in a canoe in Algonquin Park last summer at the Unplug’d Education Summit. The purpose of the “un-conference” was to bring together educational stakeholders to synthesize our individual essays (each filling the blank in the title, Why _________ Matters) into a book organized by thematically grouped chapters. You can download the e-book here, and learn more about this year’s event at Unplugd.ca.

While the whole process revolved around a socially constructive framework, my essay centered around the idea that “Sharing our Stories” matters: that each of our individual truths construct a shared “truth” or objectivity; and that if we follow this through to its logical conclusion, the skills required to realize, share and synthesize our stories become essentials in creating a healthy culture (democratic, social, educational or otherwise).

From both a personal and pedagogical perspective, this aspect of joining the personal and the collective through stories holds great interest for me, especially as we consider that our digital tools provide ever-more opportunities to share unique pieces from our individual corners of the world with tribes and swarms and communities beyond our own local geography. Indeed:

…our understanding of authorship is, at the present time, caught between two regimes: one a system of knowledge production informed by Enlightenment-era notions of the self, the other is a world of “technologies that lend themselves to the distributed, the collective, the process-oriented, the anonymous, the remix.” As we step into the future increasingly governed by the latter, we move, in some ways, back to an earlier era: a move away from a culture of isolated reading — the individual reader, alone with a book or a screen — towards a more communal engagement, the coffee-house or fireside model of public reading and debate in which literary culture historically originated. Long before print culture, storytelling was not a solitary experience but a group event.

Houman Barekat on Planned Obsolescence 

In its more classical sense, education concerned itself almost exclusively with Aesthetics, or the “broader sense” that Wikipedia describes as “critical reflection on art, culture and nature.” Educators today would do well to be aware of an emerging New Aesthetic (which is described here in a specific fashion that need not be completely digested or accepted to be relevant to our discussion).

Simply put, the New Aesthetic concerns itself with how the digital world and the real world are starting to overlap and intermingle in interesting, routine and unexpected ways.  As search engines, online ‘bots’, spam generation engines, online mapping tools, google street view, machine vision and sensing technologies proliferate, our everyday life in the western technologically advanced world is starting to bristle with new types of augmentation and hybridity.

Interview with Bruce Sterling about the New Aesthetic

As we move into next week, I hope we can play around with some of these emerging tools to begin to tell our own stories and begin to create possibilities for storytelling (digital or otherwise) as a means of individual and collective learning in your classrooms.

The main point I like to stress in talking about storytelling in our emerging media/digital landscape is that despite our new modes of communication, the act of telling our individual and communal stories is fundamental to the creation and maintenance of our culture and in this way is at the center of what education strives to achieve.

As one of my teaching idols told me on the day he retired, “Any class you teach is just another opportunity for kids to practice forming communities,” a sentiment I find myself agreeing with more the longer I teach, and a process in which I find stories increasingly fundamental.

 

DS106 Radio

Friday, July 6th, 2012

I had a great time putting together my little group’s ds106radio show last weekend.  Cris and Kavon, my bunkmates from the Naked Lunch Bunch, each submitted a summer-themed piece to me (and did it on time!) which were completely different from each other as well as from mine.  They all fit together nicely, I think!  Here’s how I put it all together.

I used GarageBand on my MacBook to put all of our stuff together, as well as to add background music and voiceovers.  I’ve used Audacity before, but I teach GarageBand to the group of teachers I train on technology throughout the year, and am really comfortable with it.  That being said, I don’t really use it that often, and it was nice to have a real purpose for it, not just to use it while training!

I created a new podcast and set up three tracks: one for the finished segments (mine and one other were .mp3s, a third was a .wav; GarageBand handled them all fine), a second for background music, and a third for my “hosting” voiceover.  I had fun finding the background music, starting by searching for “Summer” in my iTunes library, which yielded the Elvis Costello and Jonathan Coulton songs.  I wanted something more campy (as in having to do with summer camp) for my piece, so I found the theme to Meatballs with the nice little bit from Bill Murray in it on YouTube and used Audio Hijack to snatch the audio from it.  I suppose I could have downloaded the YouTube video and saved out the audio from QuickTime, but cueing up to the bit I wanted and just capturing it with Audio Hijack saved a couple of steps.
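As an aside, the mixdown GarageBand performs across tracks is conceptually simple: aligned samples from each track are summed and clamped to the sample range. A toy sketch in Python, with made-up numbers (GarageBand of course does far more than this):

```python
# Toy illustration of multi-track mixdown: sum aligned 16-bit samples
# from each track and clamp to the int16 range so loud moments clip
# rather than wrap around. Track names and values are invented.

def mix_tracks(*tracks):
    """Mix equal-length lists of 16-bit samples into one track."""
    mixed = []
    for samples in zip(*tracks):
        s = sum(samples)
        mixed.append(max(-32768, min(32767, s)))  # clamp to int16
    return mixed

segment   = [1000, 2000, -1500]   # a finished radio segment
music     = [200, 300, 400]       # background music bed
voiceover = [0, -100, 50]         # hosting voiceover
print(mix_tracks(segment, music, voiceover))  # [1200, 2200, -1050]
```

The clamp is why over-hot tracks distort when mixed: the sum simply cannot exceed the sample range.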

I had the hardest time setting the levels to match our tracks with each other and with the voiceover and music.  I listened, tweaked, listened, tweaked… in the end it is OK, but not perfect.  Even with video, the audio is the most challenging part unless you have really good equipment.  Even a little USB microphone, like my Blue Snowflake, makes a big difference in my own recording.  Still, whenever I get things from different sources I seem to have trouble.
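If you would rather measure than listen-and-tweak, the leveling chore can be framed with a little math: compute each source's RMS level in dBFS, then the gain needed to bring it to a common target. A rough sketch with illustrative numbers (not a substitute for your ears):

```python
import math

# Sketch of level matching by measurement: RMS level in dBFS for
# 16-bit samples, and the linear gain factor that moves one level to
# another. All sample values here are invented.

def rms_dbfs(samples, full_scale=32768.0):
    """RMS level of a block of samples, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / full_scale)

def gain_to_target(current_db, target_db):
    """Linear gain factor that moves current_db to target_db."""
    return 10 ** ((target_db - current_db) / 20)

quiet = [1000] * 100                       # a steady, quiet source
print(round(rms_dbfs(quiet), 1))           # about -30.3 dBFS
print(round(gain_to_target(-30.3, -18.3), 2))  # +12 dB is roughly a 4x boost
```

Measuring each segment this way before mixing would have saved me some of the listen-tweak loop.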

The final bit I did was to go on DS106radio live to introduce and then discuss the project with Martha.  She was a wonderful host, and helped me feel not-so-stupid when my lack of headphones and incorrect audio setup on my computer resulted in feedback issues over Skype.  Martha made me feel comfortable and led a fun conversation (at least for me, not sure how we sounded to everyone else!).  Thanks Martha!

So I’ve added DS106radio to my little media center in the kitchen where I generally listen to podcasts or radio while cooking and cleaning.  I’m enjoying it thoroughly and have even started thinking of a few ideas for some music shows to produce and upload.  Stay tuned!

My Newest Rube Goldberg ds106 Broadcast Machine: Ladiocast + Nicecast

Friday, July 6th, 2012


cc licensed ( BY NC SA ) flickr photo shared by sebastien.barre

For this week’s live broadcast of the ds106 group radio shows I came up with an entirely new contraption for managing different sources, in contrast to my previous methods of using Nicecast.

I still prefer using Nicecast, and do so for voice broadcasts or voice mixed with music (DJing), but in the past I have run into problems when something like Skype is one of the sources: though I can mix it in with the application mixer, the missing piece is for someone on Skype to be able to hear back what is on the stream.

The new thing in the mix is Ladiocast, the free Icecast broadcasting software. I cobbled together the approach below from a combination of Tim Owens’s Ladiocast tutorial (critical for setting up Ladiocast) and Scott Lockman’s setup for doing his Second Life to Google Hangout broadcasts.

This post is a long scroller! Go long!

So I got all of that working without too much sweat; the missing piece of the equation in Ladiocast is the lack of archiving, and here is my forehead-slap idea: I used Ladiocast merely as my audio mixer, and sent its output as input to Nicecast, which becomes the broadcaster and automatic archiver.

I had made the mistake of downloading Ladiocast from the Apple App Store, and tried a few broadcasts – it felt like I was broadcasting, but the ds106 autodj never registered me, and people were saying it was dead air. I then realized my copy of Ladiocast did not have the mp3 encoder as an option.

So I went back to Tim’s tutorial and repeated his critical steps.

  • Install the 0.10.4 version of Ladiocast, not the latest one (a copy can be downloaded from Tim’s server)
  • Install Soundflower. What is this? It’s a tool that creates virtual audio channels on your Mac – it means you can take the output of one application, send it to Soundflower, and then use that same source as input elsewhere. It’s like a mixer of audio sources on your computer.
  • Download the copy of the MP3 encoder library used for Audacity (or alternatively make a copy of the one you have). This is what makes mp3 encoding available in Ladiocast.

This file, libmp3lame.dylib, needs to be installed on your computer in the directory /usr/local/lib (Audacity puts a version in /usr/local/lib/audacity). You can get to this directory from the Apple Finder by selecting Go to Folder from the Go menu. If you have not installed the encoder for Audacity, drag the new file to this directory:

I simply did an option drag to copy it from the /usr/local/lib/audacity folder to this other location:

Now we will set up our sound channels. First, we want to set the default output for System audio to Soundflower (16ch) (the 16 means it can accept 8 stereo audio sources in a mix):

What this means is that most applications will now send sound to this source (meaning you will not hear it from your speakers or your headphones).

I said most – Skype can be finicky, and you will have to manually change its output to Soundflower (16ch) from Preferences -> Audio/Video:

Note that I have also slid the volume to its maximum setting (just in case you call someone with low volume – we want max throughput from Skype, and we will adjust the output volume elsewhere).

We will make one more setting change here later. Now, let’s set up Ladiocast. This software provides three different sources for input, and three for output (though we only need 2 and 2). Each also has a levels control so we can turn any source down or fine tune for levels. Generally you want volumes just below the part where the meters peak in red.

Input 1 is our Soundflower (16ch), meaning a mix of all audio sources on our computer (which will also include system alerts, so mind your IM notifications). Input 2 is whatever you use for a microphone; in this case it is the external mic of my iPhone headset. (You will use a headset, you will use a headset, you will not use the open mic of your laptop, right?)

So all the input sound to Ladiocast is from our computer and our microphone. Each of these sources is mapped (see arrows above) to our two output sources (Main and Aux Output 1).

Here is where it gets trippy – we are outputting sound to our other Soundflower source, Soundflower (2ch); this is completely separate from Soundflower (16ch). We can then use this mix of the microphone and computer audio as the source for our broadcast. The mix is defined by the combination of all app volumes (volume controls in apps like iTunes and Skype) and the Ladiocast input levels. Our second output, Aux Output 1, goes to headphones so we can monitor all output – this I generally keep at a medium or low level, as it is just a check to make sure audio is working, and there is really no delay in your own audio mix (as you hear in Nicecast).

Now here is the one trick and something I do not think is possible in Nicecast– for the people calling in from Skype, we want them to be able to hear everything that is broadcast back out to ds106 radio (including my voice too). Are you ready for this trippy hippy step? Can you guess?

Go back to your Skype audio preferences, and change the input source to be… Soundflower (2ch):

Do you get it? Any sound we are broadcasting is going to Soundflower (2ch), including audio from iTunes, my mic- and that combination works as an input back to Skype.

By now we have everything set up to play audio from any application on our computer: music in iTunes, a YouTube video in Chrome, and people we call in Skype. Here’s a sketch I made of it.

(click for full size)

The next thing to do is set up Ladiocast to broadcast to ds106, using the settings found in the ds106radio document. In Ladiocast, under the Streamer menu, go to Streamer 1 then Icecast:

Here are the settings for the Connection tab:

The stream name is what will be picked up by the @ds106radio twitterbot, include your own twitter handle so people can tweet back.

Confirm the settings under the Encoding tab (my first efforts were wrong because I did not have an mp3 option, that was why we did all that stuff to move that libmp3lame.dylib file around):

That’s it, everything is set, all you need to do is click “Connect”…

Well, I went one step further… because I want to save an archive. That’s where I got the idea to skip the broadcast option in Ladiocast and use Nicecast. All I needed to do was set the source in Nicecast to… (any guesses?) (that’s right – Soundflower (2ch)):

You should update the Info tab as well to reflect the stream info, and of course set up Nicecast with all the broadcast options on ds106. If you open the Archive window in Nicecast and click the Archive button, it will automagically archive any broadcast session, storing files in your ~/Music/Nicecast Archive folder.

Now just turn that broadcast on!

Depending on how you are playing audio, you will still need to dance around windows – iTunes to manage audio/playlists and maybe drop levels, Skype to initiate calls. I kept the Ladiocast main window available so I could tweak levels if a source was exceptionally low or high. Skype can be notorious for some people having low input volumes – here is another little trick I found in Ladiocast: if you click the up arrow on the little nub that reads “6dB” you can double or triple the audio amplitude (to 12 or 18dB or more):

This allowed me to compensate for Skype talkers with low input volume, or audio segments recorded at low levels. I have to say, the most challenging part of these audio projects is getting people to understand the importance of proper recording levels, and making them even between different segments. It not only makes for better audio, it lets your broadcaster spend less time and attention fiddling with knobs and buttons.
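That dB boost maps to amplitude through the standard decibel formula, which is worth having in your head while nudging levels. A quick sketch of the rule of thumb:

```python
# The standard decibel-to-amplitude relation behind the level boost:
# every +6 dB roughly doubles the amplitude.

def db_to_gain(db):
    """Linear amplitude gain for a boost of `db` decibels."""
    return 10 ** (db / 20)

for db in (6, 12, 18):
    print(f"+{db} dB -> x{db_to_gain(db):.2f} amplitude")
# +6 dB is about 2x, +12 dB about 4x, +18 dB about 8x
```

So a talker coming in at half the expected amplitude needs roughly a +6 dB boost to sit level with the rest.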

This setup worked perfectly this week – I had all of the audio in an iTunes playlist (though I picked and chose the audio to play, both music segments and the student audio project files), I played a few YouTube movies for their audio track, and had two Skype call-ins, where the speakers got to hear the entire segment played back through Skype. The proof is in the archive:

Tuesday Night ds106 Camp Magic Macguffin Radio Shows

Again, I will likely use Nicecast alone if I am DJing my own music, but if I need to do Skype call-ins or mix multiple sources, this is my new route. Just do not forget to reset the audio in Skype and system audio when you are done!

I am sure this is not the last time I will make a ds106 broadcasting tutorial.

Learn 13 Kinds of Practical Jokes at Camp Crystal Lake

Thursday, July 5th, 2012

Here is a new ds106 video assignment based on the notion of going to Camp Magic Macguffin for summer camp, Send a Camp Movie to Camp:

Take one of the movies on the list of movies with camp themes, find a trailer for it, and re-edit the audio to completely change the plot- e.g. make a horror movie turn into a comedy, or make a romantic movie seem like a spooky movie.

Send a camp movie to personality inversion camp!

I went easy and selected the trailer for the original Friday the 13th, made in 1980, the first home of Camp Crystal Lake.

I latched onto this when it came out – I was still in high school, and could have easily been one of those wide-eyed teens, one of whom is a really young Kevin Bacon. Yeah, Jason rises out of the lake and takes his gross revenge, and it set in motion an entire industry of follow-ups. I vividly recall nearly jumping out of my seat in the theater during the first one; maybe it was the Blair Witch Project of its time.

What does the fascination with horror movies say about us? It could be a safer way of playing out, in fictional form, things much worse than the horrors we witness (or ignore) in life. We could, and people do, speculate for eons.

So for my inversion of the movie, I watched the 13 countdowns of the original trailer, and wondered what if Camp Crystal Lake was the place to go to learn how to play practical jokes? A ha ha ha funny place.

This was pretty much a one-take improv. I downloaded the YouTube version of the trailer as an MP4 and opened it in iMovie. I used the Edit -> Mute Clip command to neutralize the audio of the original (you can also detach the audio and then delete that track, but sometimes it’s better to just mute the original if you are going to redo the whole thing). I then used the record-over feature to overlay my own dialogue, which you can tell is one take. I had the most fun with the nod to the Oracle of Kevin Bacon phenomenon.

To add some flair, I imported a few audio clips from Freesound: a goofy orchestra track and a track of cellos to make it sound like a concert.

Freesound.org “tune-up.wav” by bugfish
http://www.freesound.org/people/bugfish/sounds/135844/

Freesound.org “cellos three thords” by jus
http://www.freesound.org/people/jus/sounds/39557/

Ah camp.

Librarian archetypes in 5 movies and 18 seconds

Thursday, July 5th, 2012

I’m a librarian.  You probably guessed that.  We librarians are a dedicated bunch and tend to be defensive about what we do.  You would be too if you were constantly asked, “Why do I need a librarian when there’s Google?”  Yeah.  Screw you, buddy.

So here’s my take on the One Archetype, Five Movies, Five Seconds video assignment.

The librarians are from Ghostbusters, Party Girl (Parker Posey), The Matrix (Marcus Chong), Desk Set (Katharine Hepburn and Joan Blondell), and The Station Agent (Michelle Williams).  The music is “Marian the Librarian” from The Music Man.

I’ve heard it argued that though Tank is not actually called a librarian in The Matrix, he does a lot of librarian-ish work.  So there you go.  I also think the scene is representative of the high-tech world in which librarians work (and dominate, thank you very much).

Can you hear me now?: Audio and DS106

Thursday, July 5th, 2012

Most of my time was spent working on the radio show.

This is what I ended up with. I recorded the two interviews with my iPod Touch using the FiRe app. I thought the audio turned out really well, though the interview with Brooke is at a lower volume. I’m not sure if that had to do with the fact that we were in a restaurant, or because I imported the audio file into Audacity as a .wav and then exported it from Audacity to Aviary’s Myna as an .mp3. I need to learn more about the file types. Will moving an audio file from one place to another (and one file type to another) degrade quality? Questions to research.  I was going to use Audacity for this project, but I chickened out because of time constraints and opted to use Myna, since I was more familiar with it.
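A partial answer to my own file-type question, as best I understand it: WAV is uncompressed, so shuttling audio around as WAV is bit-exact, and quality is only lost when a lossy encoder like MP3 gets involved (and each additional lossy encode loses a little more). A small sketch using Python's standard wave module to show the lossless part:

```python
import io
import wave

# WAV is an uncompressed container: writing samples out and reading
# them back returns the identical bytes. Quality loss only happens at
# a lossy encode (e.g. MP3), not when moving WAV files around.

def wav_roundtrip(frames, rate=44100):
    """Write 16-bit mono frames to an in-memory WAV and read them back."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(frames)
    buf.seek(0)
    with wave.open(buf, "rb") as r:
        return r.readframes(r.getnframes())

original = b"\x00\x01\xff\x7f\x00\x80"  # three 16-bit samples
print(wav_roundtrip(original) == original)  # True
```

So the WAV import itself wouldn't degrade anything; the restaurant noise or the final MP3 export are likelier culprits.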

The interview with Brooke was a good 40+ minutes long, and half the time I had no idea what he was talking about. That made editing difficult and time-consuming. The interview with Kate was about 3 minutes long, and she was pretty straightforward with her post-apocalyptic vision.  Better planning could’ve made this project a little easier, but I guess that can be said for a lot of things.

 

Mission: DS106 – Return to the Silent Era

Tuesday, July 3rd, 2012

Fugitive card


Over the next few weeks, I’d like to look at some of my favorite crime films. Something about cops and robbers grabs me. Maybe it’s the loyalty and betrayal – or the chess match between good guys and bad guys racing to figure out where all the pieces fit before their counterparts do. Emotional and tactical thinking. Passion and prowess. Great directors and performances. Romanticizing childhood. The works.

For my Return to the Silent Era assignment, I decided to rework a trailer for Heat, which I watched three times in the theater when it came out. Since then I’ve watched it a kajillion times on VHS and DVD. Danny Trejo, Jeremy Piven, and Henry Rollins in one film! Hometown star Amy Brenneman!

I grabbed a trailer from YouTube using the Firefox add-on Download Helper.

Then, to begin producing the silent era trailer, I searched the Creative Commons pool on Flickr for a card I could use for titles, dialogue, and credits. I found this card and painted over its text in Acorn.

Next, I watched the trailer a few times to transcribe the dialogue and narration.

After that, I imported the trailer into iMovie and began to work on editing it.

It was a shame to strip out the soundtrack and dialogue. The trailer, like the film, is a gem of audio and visual editing, pacing, and storytelling. I love the soundtrack, including Moby’s contributions. To preserve some of that pacing, I tried to edit the trailer in such a way that my title, credit, and dialogue cards occupied splits where new dialogue or dramatic beats existed in the original trailer.

After I stripped out the sound, I made the clip black and white. Then I exported it.

Next, I made my title, credit, and dialogue cards in Acorn. I edited out some lines from the original trailer. I also amended or otherwise edited some of the dialogue I used so that it would fit legibly on a card and not take too long to read. I used the STFangSong font because its serif has more curves than corners, like the border embellishment of the cards I used for text.

I opened iMovie again, imported the black and white version of the trailer, and edited in my cards. I made a new card for Regency because its original logo had some animation on it. I left the first Warner Bros. credit alone because it’s a static painting of an iconic trademark; it isn’t so jarringly modern as an animated logo.

After I worked in my cards, I exported the movie again. Then I imported that file back into iMovie. With every new import, I deleted older clips from the editing box in iMovie, but kept them inside the project library for reference so that I could, say, skim the original trailer to see who said what when.

I used the film grain effect in iMovie to age and deform the black and white trailer.

Then I searched freesound.org for piano riffs to use in the trailer. I wanted something elegiac that sounded unproduced (like a real piano) and that looped like Moby’s theme in the trailer. I found this clip, which had a tinny sound to it, almost as if it had been recorded and played back on early, low-quality audio equipment, or as if it had suffered over time from multiple transfers to new audio formats on its way into my trailer. I opened GarageBand and played around with layering this loop with other samples from freesound.org, but I couldn’t make anything that wasn’t way too distracting. Ultimately, I just looped my sample for the length of the trailer. However, I wouldn’t mind trying to score it sometime in the future using GarageBand and a Korg nanoKEY or something.
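The looping step itself is simple enough to sketch: repeat the clip until it covers the target length, trimming the final repeat. A toy version in Python, where a short list of values stands in for real audio frames:

```python
# Toy sketch of looping a short sample to fill a longer duration:
# repeat the clip enough times, then trim to the exact target length.
# The "frames" here are invented stand-ins for real audio data.

def loop_to_length(clip_frames, target_frames):
    """Repeat a list of frames until it is exactly target_frames long."""
    reps = -(-target_frames // len(clip_frames))  # ceiling division
    return (clip_frames * reps)[:target_frames]

piano_loop = [0.1, 0.2, 0.3]           # a 3-frame stand-in for the riff
trailer = loop_to_length(piano_loop, 8)
print(trailer)  # [0.1, 0.2, 0.3, 0.1, 0.2, 0.3, 0.1, 0.2]
```

The trim at the end is why a loop that doesn't divide evenly into the trailer's length cuts off mid-phrase, which is worth minding when picking the loop point.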

Finally, I exported the grainy, piano-looped silent era trailer, uploaded it, and shared it here.

In watching it again (and again and again as I worked), I think I could have edited out everyone but Pacino and De Niro. I suspect the narrative of this silent era trailer would have worked better in terms of pacing if it had focused exclusively on the main characters and the core dynamic of their relationship with one another. As it is, the silent era trailer feels a little long to me, but that might also be a function of the music I chose or something else that will hit me later.