Code Code Code …. Review Review Review

Code freeze for the 0.5 release of Popcorn Maker/Butter is coming up Tuesday and we have a lot to do before we close the code-merging flood gates. In addition to this week being one day shorter for most of us because of Victoria Day, some of us ventured off to a conference downtown on Thursday, making the schedule even tighter. Needless to say, we have had to pick up the load to the nth degree to make up for all the lost man power (or maybe it’s finger power?)!

This week I finished off a few things. One was adding a test harness system for Butter. Now we have one file that will run all of our QUnit-based tests at once, making testing much easier. It will land Monday since it’s now the weekend and whatnot.
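A minimal sketch of how such a one-file harness could aggregate results (all names here are invented for illustration, not Butter’s actual code). The DOM-specific part, loading each QUnit page in an iframe and reading back its results, is passed in as a callback so the aggregation logic stands on its own:

```javascript
// Toy harness aggregator (assumed structure, not the real Butter harness).
// runPage would, in a real harness, load the page in an iframe and return
// its QUnit results; here it is injected so the logic is easy to test.
function runSuites( pages, runPage ) {
  var totals = { passed: 0, failed: 0 };
  for ( var i = 0; i < pages.length; i++ ) {
    var result = runPage( pages[ i ] ); // e.g. { passed: 12, failed: 0 }
    totals.passed += result.passed;
    totals.failed += result.failed;
  }
  return totals;
}
```

The payoff is one summary line for the whole suite instead of opening each QUnit page by hand.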

I also finished updating the cornfield tests. They now use the proper updated methods, and on top of that, fixing them exposed a few problems with our validation of passed-in data. They are also just plain better than they were before. A lot of places in the tests previously used timeouts to call QUnit’s start() method to let QUnit know it could run each assertion, which could cause potential issues. They now follow a “cascading” style, if you will, where each subsequent call to BrowserID happens within the callback function of the previous call, letting us know the previous work has finished and preventing any asynchronous issues. For example:

asyncTest( "Async API", 7, function() {
  var foundProject = false,
      i, len;

  butter.cornfield.list( function( res ){
    ok( res, "The project list response has data" );
    equal( res.error, "okay", "Project list status is \"okay\"" );
    ok( res.projects, "There is a list of projects" );

    for ( i = 0, len = res.projects.length; i < len; i++ ){
      if ( res.projects[ i ].name === filename ){
        foundProject = true;
        break;
      }
    }

    equal( false, foundProject, filename + " is not present in the projects list" );

    butter.cornfield.load( filename, function( res ){
      deepEqual( res, { error: "project not found" }, "The project load response is project not found" );

      butter.cornfield.save( filename, stringedData, function( res ){
        equal( res.error, "okay", "The project save response is okay" );

        filename = res.project._id;

        butter.cornfield.load( filename, function( res ){
          deepEqual( JSON.parse( res.project ), data.data, "The project is the same" );

          start();
        });
      });
    });
  });
});

With this setup we know start() won’t be called until our server has completed our requests against BrowserID and the database holding project information.
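Boiled down to its essence, the cascading pattern looks something like this toy sketch (all names invented; the real calls are asynchronous server round-trips, faked here with plain callbacks):

```javascript
// Toy version of the cascading-callback pattern. Each step only fires once
// the previous callback has run, so the order is guaranteed without any
// timeouts. In real code, step() would be an async request.
function step( name, log, callback ) {
  log.push( name ); // stand-in for a server round-trip
  callback();
}

function runCascade( log, done ) {
  step( "list", log, function() {
    step( "load", log, function() {
      step( "save", log, function() {
        done(); // the equivalent of QUnit's start()
      });
    });
  });
}
```

Because each step runs inside the previous callback, the ordering holds even when the steps really are asynchronous, which is exactly what the timeout-based approach couldn’t promise.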

On top of that I took over the manual tests that we are making. These tests contain a list of instructions for people to run and the expected outcome if they follow those instructions. They can then report whether the expected results were achieved. Eventually these will be linked to a database of some sort where we can monitor the results people report. Most of the real issues we had were because the tests themselves were being loaded into an iframe. This caused issues with the sources provided for various plugins and other scripts because they were relative links in our config file. I also realized we were missing some vendor-prefixed versions of linear-gradient on the index.html page, causing the links for the manual tests not to appear in browsers other than Firefox, making it look like nothing loaded at all.

On top of all this there were many reviews. While reviewing requires that the person doing it take their time looking things over, and is a very important task, it also means that someone has gained enough experience in the project to be “trusted,” so to speak, which is a nice sign that I am progressing a lot within the Butter and Popcorn projects.

Other Cool Stuff
We also have what is basically a complete rewrite of our Flash players in the works. David Humphrey started the work to mimic the HTMLMediaElement spec, and early testing is showing it’s going to work quite awesomely.

We also have the Django front end coming up for the 0.5 release, slated to be available on June 15th. This will create a new template management system and bring a lot of new cool stuff to the project.

0.5 is slated to be about 200 tickets! Yikes.

Can’t wait for the rest 🙂

OCE Discovery and more Popcorn popping

This week has been a shorter work week for me in the typical sense. I was asked to go help represent Seneca College down at OCE Discovery 12, a conference held by Ontario Centres of Excellence aimed at bringing together key players from industry, academia, government, the investment community as well as entrepreneurs and students to pursue collaboration opportunities.

This event was very interesting for me. It was a great opportunity to work on my presentation skills, although not in the typical sense. One thing that was said to us back at the beginning of the summer was that we need to be able to talk about what we are working on to anyone at any time: to be able to describe our project at a high level so that anyone could understand it. I spoke to people of varying knowledge levels, from none at all to other programmers from other schools, such as Niagara Research.

However, I feel that something like Popcorn is hard to show off at that kind of event. All we have are computer monitors showing videos on screen. In particular we were demoing our open source work done in conjunction with Native Earth, and that particular demo is not especially eye-catching. It would have been better to show off some of the flashier demos that are out there on the net, or maybe make one ourselves. What might have been even better would be to have multiple computers set up so people could play with Popcorn Maker. That is something more people can understand, and the fact that they would be creating something themselves would make it a more lasting experience.

Still, there were a lot of cool things on display, including but not limited to the U of T electric car that was used as part of a race across Australia and an app that read your brain waves, measured your focus level and let you control the physics of a game. Going to this kind of event has definitely given me more thoughts about the kinds of things I want to do in the future. I never would have thought this 3 months ago, but perhaps the career of a tech evangelist is in my future!

Oh Yeah, I Did Some Coding Too

This week consisted of more fixes for Butter’s unit testing and many code reviews. We also spent a lot of time discussing key aspects of the Django app that is going to make its way into Popcorn Maker soon, along with new UI/UX stuff. While I am learning a lot more when it comes to JavaScript, I’m still very humble about my skill level. Projects like these keep you humble because the code can be very hard to follow and learn. I’m sure if someone else was handling my tickets it would take them a quarter of the time, but that’s why we are all here: to learn.

I’m looking forward to finishing this stuff up so I can switch gears back to UI testing now that we have additional tools available, such as David Humphrey’s addition of Firefox’s “special powers” chrome-access JavaScript. Going to be awesome!

Implementing Testing for Popcorn Maker

For the foreseeable future my main focus is going to be beefing up the testing system in place for Popcorn Maker. As it stands, Popcorn Maker doesn’t have the same sort of testing environment and infrastructure that Popcorn.js has, and this needs to change. Some basic QUnit-based tests are present, however they are never run and most are severely outdated, being based on an older version of the Butter SDK. This is due to the culture the project currently has, and my goal is to be a part of changing it.

All the Areas that Need Testing

  • Comm
  • Dialog
  • EventEditor
  • Cornfield
  • PluginManager
  • Timeline
  • TrackEditor
  • UI Testing
  • Test Harness so all the above testing can be run easily all at once

Needless to say, it’s a long list. A lot of it involves areas of the Butter SDK that I don’t know much about. On top of that, there doesn’t seem to be one framework that can handle everything we want to do on the UI testing side as nicely and neatly as QUnit does for everything else. I have played around with Selenium a lot and that proved to be quite the nightmare. I’m still not 100% certain it’s what we want to use, because there are definitely elements of it that work and a lot that seemingly don’t. Right now I’m playing around with two frameworks, PhantomJS and CasperJS, the latter being built on top of PhantomJS to provide additional functionality.

As it stands, CasperJS doesn’t appear to allow me to send key events in any way to the page it interacts with. This is bad because it basically means I can’t run any of our tests that rely on some sort of key press from the user. On top of that, there doesn’t appear to be anything available to test dragging and dropping elements on a page. Once again, this is a problem because we wouldn’t be able to test moving things like track events, track containers, track handles, events onto a track, and anything else I haven’t included.

Thus far I have only been able to run these on the command line. I’m sure there is some way to integrate all of this together, so I asked on their Google group to see if someone more knowledgeable can assist me. In the meantime, here is some of what I have come up with thus far:


casper.test.comment( "Testing iFrame Dialog Closing with Esc Key - Popcorn Maker" );

var p = require( "webpage" ).create(),
    t = Date.now();

casper.start( "http://localhost:8888/templates/test.html", function(){

  t = Date.now() - t;

  console.log( "Loading Time: " + t + "msec" );

  // Ensure template has loaded
  this.waitUntilVisible( "#butter-header-new", function(){

    // Confirm button exists
    this.test.assertExists( "#butter-header-new", "The button did load" );

    // Move mouse over button
    this.mouse.move( "#butter-header-new" );
    // Click zee button
    this.click( "#butter-header-new" );

  });

});

casper.then( function(){

  // Our iFrames don't have IDs, but all use the same class and this class is
  // only ever present in the DOM once
  this.test.assertExists( "iframe[class=' fade-in']", "The iFrame Loaded" );

});

// Run the whole test suite (all the above)
casper.run( function(){

  // Sends a test done message when all the of the above has finished running
  this.test.done();
  this.test.renderResults( true, 0, "iframe_esc_close.xml" );

});

This set of tests opens our testing template and then waits for our content to load. Once it has, it ensures an element with the id butter-header-new is present in the DOM. It then moves the mouse over that element and clicks it. After the click has finished it checks whether an iFrame with the class name fade-in is present. All of our iFrame dialogs use this class name, and there will only ever be one iFrame dialog open at a time.

One of the interesting things run after all the tests are done is Casper’s renderResults. It writes the results of our testing as XUnit-formatted XML to the file I specified. From what I read, we might be able to hook this into a continuous integration service such as Jenkins.
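For reference, XUnit-style XML generally looks something like this hand-written sample (my own illustration of the general shape, not actual Casper output):

```xml
<testsuite name="iframe_esc_close">
  <testcase classname="casper" name="The button did load"/>
  <testcase classname="casper" name="The iFrame Loaded">
    <failure type="fail">iframe[class=' fade-in'] not found</failure>
  </testcase>
</testsuite>
```

Jenkins’ JUnit plugin and most other CI services can ingest files in roughly this shape directly, which is what makes the hookup plausible.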

Here’s what running that code looks like:

What’s next?
Hopefully we can sort this out sooner rather than later. Once we settle on exactly what frameworks/technologies we will use to write our tests we can actually go about and, well, write them! Hopefully my next post will include some of that 🙂

Until then!

My Experiences at Mozilla Hot Hacks: Two Awesome Days

This past weekend I, along with many of the Popcorn.js team in Toronto – Chris DeCairos, Bobby Richter, Dave Seifried, Jon Buckley, Scott Downe and Kate Hudson – was a part of the Mozilla event Hot Hacks, which is a part of Mozilla’s Living Docs Project. The idea behind Hot Hacks is to bring web developers together with documentary filmmakers to rapidly prototype web documentaries and make some awesome experiences.

Unlike my colleagues, I was assigned the task of being a floater amongst all the projects. I went where help was needed and did what I could. This led me to helping three projects in total over the two days: Following Wise Men, Turcot Interchange and Immigrant Nation. It was a unique experience because I got to involve myself with multiple filmmakers and experience their ideas for making their documentaries interactive on the web. Admittedly, some of the contributions were smaller than others, but it was fun nonetheless.

These kinds of events are great because I feel they have helped me gain new ways of looking at solving problems. When you are presented with designing and then implementing something unique and interesting in such a short time frame, you quickly learn how to work and accomplish goals in ways just as unique as the projects themselves. At the same time I know I definitely couldn’t have taken on the role of being a main developer on one of these teams. I’m nowhere near capable of pulling off some of the awesome stuff seen there over two days. I do hope to be there sometime soon. Maybe for the next one!

If you want to take a look at some of the demos, they are available at the links below:

Following Wise Men
Looking for Los Sures
The Message
Turcot Interchange

As a final note, I really want to thank Mozilla. It truly was an awesome experience and a really fun time. Can’t wait for the next!

Using find with sed to replace a whole buncha stuff!

So, one of the things I’ve learned through open source is that it’s good to document things you have had difficulty with or learned. Not only can it potentially help others, but chances are you will forget the exact specifics, and now you have an easy-to-find solution to your problem!

My group’s PRJ666 project is winding down to the final few polishing steps. One thing that has annoyed me is that all of our links have been prefaced with /PRJ666-Implementation/pages/ to prevent issues we were having before, which are now gone. I wanted to remove all of this nonsense because, in the off chance someone actually uses this project (Hah!) and they decide to change one element of the deployment (a highly likely move)…. KABOOM. Site’s broken.

I know there are a lot of powerful commands available to me on the Unix command line. I just didn’t really know how to combine them.

I used a combination of find, sed and a bash script to help solve this problem. Here’s what the code looked like:

#!/bin/sh
# Strip the hard-coded URL prefix from every file under pages/
for files in `find PRJ666-Implementation/web/pages/ -type f -print 2>/dev/null`
do
  sed 's/\/PRJ666\-Implementation\/pages\//\.\.\//g' "$files" > "$files.tmp"
  mv "$files.tmp" "$files"
done

This finds all files in a directory, including its subdirectories, and searches each one for the string /PRJ666-Implementation/pages/, replacing it with ../ .

Hope this helps someone out there like me. These things are definitely one of my many weaknesses and it’s something I’m looking to improve on.

OSD700 – 1.0 Release

Our final release of the course is meant to show not only all the work that we have accomplished, but that we have something we can easily show people: something out in the world of open source that is visible to others. While all the more “minor” things we do along the way are just as important, if not more so, they aren’t quite as noticeable.

For me this was implementing video playback jitter, one of many missing media statistics in Firefox. While I have some of it working at this point, it’s not in a passable state yet because I do not know how to fix one other area of the code.

Playback jitter, as defined by the spec, is the intended duration the current video frame spends on screen, minus the actual duration spent on screen. Some of this data (at least, what I think is the correct data) was already available to me inside VideoFrameContainer::SetCurrentFrame via the information passed in about the current frame: aTarget (the target time this frame would be presented) and mPaintTime/lastPaintTime (the times the current and last frames actually were presented). These seemed like the right pieces of information to calculate the second part, the time the frame actually spent on screen.


if (!lastPaintTime.IsNull() && !aTargetTime.IsNull()) {
  mPlaybackJitter += (aIntendedDuration - (aTargetTime - lastPaintTime).ToMilliseconds());
}

The key part above was that I needed to give this method a way to know the actual time the frame was supposed to spend on the screen. I looked around at what was calling it and found nsBuiltinDecoderStateMachine::RenderVideoFrame, which just so happens to have access to a class called VideoData containing information about the start and end time of the current frame to be rendered.

From there I added a new argument to the SetCurrentFrame method that would contain this information for me:

void SetCurrentFrame(const gfxIntSize& aIntrinsicSize, Image* aImage,
                     TimeStamp aTargetTime,
                     PRInt64 aIntendedDuration);

And with the appropriate linkage through the use of IDLs and making getters available, I can now access this information in my own Firefox builds. I’m unsure how close this is to being right, but once someone has time to review it I’ll fix up my mistakes ASAP, as this stuff is a lot of fun.

I’ve also taken on the task of making HTMLProgressElement no longer inherit from HTMLFormElement and instead inherit from HTMLGenericElement. This required removing an attribute and its getters/setters, as well as updating forward declarations of various methods and updating some tests. I liked this one because it was sort of a mini test of everything I have learned along the way. It lets me step from the IDL level of things and make all the connections needed in the C++ land of the code.

I’ve enjoyed working on all this Firefox stuff because it really allows me to see the kind of thought process and work that goes into making something as simple as document.getElementById in JavaScript land actually work. I plan on continuing with the bugs I have, but also working more on Core/DOM bugs, as they have been very fun and a good learning process for me. This course has been a blast and I have loved every minute of it.

Pit Stop Number One in Open Source – The Start of the Second Leg

My how this has been quite the experience. I’m going to try and keep this structured but honestly I’ll probably head off on tangents every now and then.

This all started in September last year for me when I decided to take the first of two open source courses taught here at Seneca, both by David Humphrey. I took it based on the recommendation of my friend Dave Seifried, as he enjoyed the course and was continuing the work through the position he obtained with the Centre for Development of Open Technology (CDOT for short) at Seneca College @ York. I figured I’d give it a shot because a lot of the work he was doing involved the web, and I was fairly confident that I wanted to push my programming career down that path when all was said and done. Granted, I had no idea about the specifics beyond just “Web Development,” but I knew it was a start.

Oh yeah, I also happened to meet these buffoons one fateful night if you will at the end of August in 2011 who definitely helped give me more interest in taking the particular course: Jon Buckley, Chris DeCairos and Scott Downe.

I definitely didn’t come into the course knowing a lot about open source itself. I had somewhat of an idea, but it was mostly limited to “So you mean the software that’s free, right?”. I had no clue about the truly deep philosophy behind it before reading The Cathedral and the Bazaar by Eric S. Raymond or watching Revolution OS (still a great watch BTW; I totally went back over it today, or is it yesterday…). Both of these really opened not only my eyes but my mind to what an open source community is and the kinds of cool things people have done over the years.

From there I was quickly exposed to some of the basic principles of any well-run open source project. I found a typo in a comment in one file I was looking over and sort of jokingly pointed it out in IRC. I was then told to file it, fix it and submit it for review. Next thing I knew I was being assigned to handle all the updates happening to this particular piece of code, as it was in an area being focused on for that release, and bam, I pretty much became the owner of that little plugin. Granted, in the big scope of things it’s honestly rather small, but that doesn’t change the fact that it was rather cool and exciting to say I have code out there in projects being used by REAL people, compared to all the silly projects we do in the rest of our courses.

The important part behind all of this was the different real-world practices I was learning. Some of it was simply better ways to code, as up until this point I hadn’t been exposed to much JavaScript, and that’s what I was primarily programming with for most of this course. Beyond just the raw coding skills, I learned about the work that goes into making a good piece of code and getting it accepted. Part of this was learning to adapt to specific coding styles, but a lot of it was learning about the review process that goes into any good project. It’s the kind of thing you can easily take to anything else you work on after this point and apply to better all the code you write.

Blog! Blog! Blog!
This is definitely another thing that I have been exposed to because of the work I’ve done in these courses. Blogging is a great tool for an open source developer because it allows you to document the work you are doing. You can write about the difficulties you have had and perhaps other people will notice and try to help you out; or you can simply write about how you solved a problem or the steps you took, and then easily reference it later for your own use or to share with others. I’ve done it many times myself when needing to figure out how to do something again, such as pushing patches for Firefox bugs up to the try server.

OSD700 – The No Failing allowed version of Open Source
What I mean by that is that before, it was quite alright if we didn’t actually land anything, as it was all still a learning process (and really still is). The idea now was that we weren’t “open source babies” anymore and we needed to put our own stamp on the web in some form. For most of us this was through working on Firefox bugs, and I was no exception.

I could easily list off the exact bugs I have worked on, what’s landed and whatnot, but that’s not the point here. The point is that pushing ourselves harder in this course has allowed me to learn and do so much more than I have in any other course. I went from contributing to an open source library to an open source mega-project. The sheer size of the code base and the complexity of the stuff you see in Firefox is astounding. No one man or woman can ever understand it all. Such a person doesn’t exist.

The big thing here, though, was the amount of confidence it gives each and every one of us. I’m no C++ guru, but working with this kind of code helped me sharpen my own skills and learn how they connect the side of the browser I was used to, with JavaScript, all the way down to the core operations of your OS. I had never even heard of IDL files before this, but now I know what they are, what they do and how to work with them in your code. Without doing this kind of work I would never have had the confidence to go and actually report bugs to big projects like these, especially fairly significant bugs (at least, in my opinion) like what I threw over at Google!

These courses have really shaped me in ways that are hard to imagine. They have given me the opportunity to work with real projects and the ability to say that I have contributed to real-world projects that thousands or even millions of people use! They have given me the opportunity to strengthen my theoretical knowledge of the languages I have worked with and exposed me to real-world implementations using these languages, showing me more efficient ways of doing some of the things I learned in my previous courses. On top of all that, I have gotten the opportunity to meet some cool people who work on these various projects and to work alongside them.

I still have a lot to learn. Hell, I’ve only actually been programming for about three years now, and working with these projects has been a real eye opener for me, but in a good way. An awesome way. This may be the first pit stop in my little adventure here, but there are a lot of big things coming and I can’t wait. If you are a student at the college and still have the opportunity to take these courses: do it. You won’t regret it.

Till next time!

OSD700 – Release 0.9

The journey is nearly complete and more work has been done!

I figured out why I broke video playback in the original demo where I tried to show the linkage of my mozPlaybackJitter statistic. I was originally trying to also subtract the frameDelay in my calculation. The problem is that the first time through, there literally is no value for it yet. I’m fairly certain this was the cause of the problem.

This then brings me to the issue of solving the second part of the equation. I am fairly confident that Ei (desired duration frame i spends on the screen) was being calculated correctly, as this information was available to me in the VideoData argument of nsBuiltinDecoderStateMachine::RenderVideoFrame; from there I simply pass it into VideoFrameContainer::SetCurrentFrame. The second part, Ai (actual duration frame i spent on the screen), is a little bit trickier. Eventually, after looking around a little more, I hit on the idea of taking the passed-in aTargetTime argument of SetCurrentFrame and subtracting ImageContainer’s mPaintTime from it. In my head at least, this value should represent the actual time the frame spent on screen.
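In symbols, what I’m trying to accumulate works out to something like this (my own reading of it, not the spec’s exact wording):

```latex
\text{playbackJitter} \;=\; \sum_i \bigl( E_i - A_i \bigr),
\qquad A_i \;\approx\; t^{\mathrm{target}}_{i+1} - t^{\mathrm{paint}}_{i}
```

where \(E_i\) is the intended on-screen duration of frame \(i\), \(t^{\mathrm{target}}_{i+1}\) is the aTargetTime passed in when the next frame arrives, and \(t^{\mathrm{paint}}_{i}\) is the mPaintTime recorded when frame \(i\) was actually painted.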

The next part of the problem is converting it all down to one type. As it stands I haven’t been able to figure this one out. The information I get for Ei is of type PRInt64. The target time and the paint time of the last image/video frame are both of type TimeStamp. From everything I can tell, there doesn’t appear to be a way for me to convert these into some sort of common type (preferably TimeDuration, as this would let me convert it to various measurements in seconds).

The final part of the problem is that most of the people who would be able to assist me with this area of the code live in or around Australia, making for a massive time difference. Oh yeah, it’s a long weekend right now to boot!

I also took on Bug 686913 as a quick addition, and to help get back into things, as I had to take some time off at various points. This one isn’t too difficult, but it’s different at the same time because I have never had to entirely change the class that something inherits from. The main difference here is dealing with all those NS_FORWARD declarations. I pushed my patch to try, and there are probably more failures than would normally be desired.

I also spent some time looking into Bug 723020, which involves adding column information to window.onerror, allowing more detailed error information for developers. The problem I’m left with is that the linked W3C bug doesn’t, from what I can tell, have any information on where the spec is now. I wanted to go read up on it, as it’s definitely interesting, but I’m stuck there.

For our final release I hope to have my video statistic implemented and finished, perhaps even staged. Along with that, I want to finally get some work done on this new one as well, as it’s very interesting to me.

Till then!

Implementing Video Playback Statistics in Firefox – Part II

So, after my demo failed horribly during my presentation last week, it was clear that my initial attempt at making some sort of data about the video accessible through my mozPlaybackJitter attribute didn’t work at all. The video would never load, and trying to play it would freeze my browser.

Awesome.

I hate it when mistakes aren’t at compile time…

I’ve been going over my code and honestly, being a rather inexperienced programmer, this is a rather futile task. I haven’t been able to fix it this way, nor do I feel I’ll be able to. I’m thinking it’s time to pull out the ol’ GDB.

More soon!

OSD700 – 0.8 Release

For this release, and the remainder of the semester, I have been and will be working on implementing the missing video statistics with my colleague Dave Seifried. For starters I have been attacking the playbackJitter statistic, and let me tell you, for someone who has never worked with this area of the code before, it has been quite challenging figuring out how things work on this side. Perhaps I shouldn’t have jumped over to this stuff this late in the semester, but then again it’s been quite the change of pace.

To be brief, playbackJitter is used to obtain an overall metric for the perceived playback quality and smoothness of the video up to the point when the value is retrieved. From what I gather, it’s something that is constantly calculated along the way, so where I wind up placing the code that calculates it is going to be key.

By now I know how to easily update the IDL files and add getters/setters for attributes (a getter only, in this case). The problem becomes figuring out where the information I need to manipulate lives and giving myself an appropriate path to access it. At this point I’m relatively comfortable speaking with some of the immensely smart people who work at Mozilla, so I went and talked to Chris Pearce about the kinds of things I should be looking at.

He first pointed me toward how mozFrameDelay was handled in the nsDOMHTMLVideoElement.cpp class. In this case the information is handled in VideoFrameContainer::SetCurrentFrame() by setting the mPaintDelay member variable each time SetCurrentFrame() is called and then grabbing its value in seconds. In the end a video frame is just an image, so it actually makes sense. SetCurrentFrame() itself is called every time there is a current frame (in this case, I’m assuming it means a frame was successfully decoded) in nsBuiltinDecoderStateMachine::AdvanceFrame(), which calls nsBuiltinDecoderStateMachine::RenderVideoFrame().

So I can see how that all links together right now which makes a lot of sense to me.

This then led me to look at all the different bits of data available to me within VideoFrameContainer. FrameDelay definitely feels like part of it, but at this point I don’t think that with the code available there I could actually calculate playbackJitter. This leads me to a suggestion cpearce gave me: add a new parameter to SetCurrentFrame that will contain the intended duration the frame was supposed to spend on the screen. This data could easily be grabbed from the aData argument of nsBuiltinDecoderStateMachine::RenderVideoFrame().

I now have a good direction to aim for and have a concrete idea of what I actually need to do. Stay tuned for tomorrow as I’ll actually be able to implement something!

WWWYKI