Mozilla DevCenter

What Is Firefox
Brian King provides a brief look at Firefox's origins and evolution, and then dives into its support for web standards like CSS and XML, its debugging and extension capabilities, and some cool new features in the upcoming 1.5 release. If you're considering a switch to Firefox, this article may help make the decision for you.

Mozilla as a Development Platform: An Interview with Axel Hecht  Axel Hecht is a member of Mozilla Europe's board of directors, and a major contributor to the Mozilla project. At O'Reilly's European Open Source Convention (October 17-20), Dr. Hecht will be talking about Mozilla as a development platform. O'Reilly Network interviewed Dr. Hecht to find out if the long-held dream of Mozilla as a development platform was about to come true.   [O'Reilly Network]

A Firefox Glossary  Brian King, with some help from Nigel McFarlane, covers everything from about:config to "zool" in this fun, fact-filled Firefox glossary. It's by no means exhaustive, but you'll find references to specific chapters or hacks throughout the glossary to Nigel's book, Firefox Hacks. When you're ready to dig deeper, check out his book.   [O'Reilly Network]

Important Notice for Mozilla DevCenter Readers About O'Reilly RSS and Atom Feeds  O'Reilly Media, Inc. is rolling out a new syndication mechanism that provides greater control over the content we publish online. Here's information to help you update your existing RSS and Atom feeds to O'Reilly content.  [Mozilla DevCenter]

Hacking Firefox  This excerpt from Firefox Hacks shows you how to use overlays (essentially hunks of UI data) to make something you want to appear in the Firefox default application, perhaps to carry out a particular function of your extension. For example, you might want to add a menu item to the Tools menu to launch your extension. Overlays allow existing Firefox GUIs to be enhanced.   [O'Reilly Network]

Mozile: What You See is What You Edit  Most modern browsers don't allow you to hit "edit" and manipulate content as easily as you view it, WYSIWYG-style. Mozile, which stands for Mozilla Inline Editor, is a new Mozilla plug-in for in-browser editing. This article by Conor Dowling provides an overview of Mozile and what in-browser editing means.   [Mozilla DevCenter]

The Future of Mozilla Application Development  Recently, announced a major update to its development roadmap. Some of the changes in the new document represent a fundamental shift in the direction and goals of the Mozilla community. In this article, David Boswell and Brian King analyze the new roadmap, and demonstrate how to convert an existing XPFE-based application into an application that uses the new XUL toolkit. David and Brian are the authors of O'Reilly's Creating Applications with Mozilla.   [Mozilla DevCenter]

Remote Application Development with Mozilla, Part 2  In their first article, Brian King, coauthor of Creating Applications with Mozilla, and Myk Melez looked at the benefits of remote application development using Mozilla technologies such as XUL and web services support. In this article, they present a case study of one such application, the Mozilla Amazon Browser, a tool for searching Amazon's catalogs.   [Mozilla DevCenter]

Remote Application Development with Mozilla  This article explores the uses for remote XUL (loaded from a Web server), contrasts its capabilities with those of local XUL (installed on a user's computer), explains how to deploy remote XUL, and gives examples of existing applications.   [Mozilla DevCenter]

 Made Easy  Now that is about to release Mozilla 1.2 and Netscape has come out with the latest version of their own Mozilla-based browser, Netscape 7, this is a great time to see what other people are building with Mozilla's cross-platform development framework. Here's a little history about, and a roadmap to,   [Mozilla DevCenter]

XML Transformations with CSS and DOM  Mozilla permits XML to be rendered in the browser with CSS and manipulated with DOM. If you're already familiar with CSS and DOM, you're more than halfway to achieving XML transformations in Mozilla. This article demonstrates how to render XML in the browser with a minimum of CSS and JavaScript.   [Mozilla DevCenter]

Roll Your Own Browser  Here's a look at using the Mozilla toolkit to customize, or even create your own browser.   [Mozilla DevCenter]

Let One Hundred Browsers Bloom  In this article, David Boswell, coauthor of Creating Applications with Mozilla surveys some of the more interesting, and useful, Mozilla-based browsers available now.   [Mozilla DevCenter]

Using the Mozilla SOAP API  With the release of Mozilla 1.0, the world now has a browser that supports SOAP natively. This article shows you how Web applications running in Mozilla can now make SOAP calls directly from the client without requiring a browser refresh or additional calls to the server.   [Web Development DevCenter]

Today's News
December 20, 2012

Byron Jones: [a-team] conversations with face

face is an irc bot i run on the #ateam channel.
its responses are purely random, drawing from past conversations on channel.

<jedp> hello ateam, is there anyone there at this hour?
<face> jedp, greetings
<jedp> face hola1
* face gets a cup of tea
<jedp> face I want to send an observer message with marionette. Can it be done?
* face swoons
<jedp> face likes tea
* face can certainly file one if you want. It’s whatever 12 are in the first case we built
<jedp> face – or if marionette doesn’t have a way to send observer messages, maybe with SpecialPowers?
* face feels loved
<jedp> face I tried SpecialPowers.sendSyncMessage, thinking that would be there from this, but no dice:
* face goes afk
<glob> jedp, sorry, face is actually a bot :|
<jedp> glob i was just coming to that conclusion

Filed under: face, mozilla [Source: Planet Mozilla]

François Marier: Keeping GMail in a separate browser profile

I wanted to be able to use the GMail web interface on my work machine, but for privacy reasons, I prefer not to be logged into my Google Account on my main browser.

Here's how I make use of a somewhat hidden Firefox feature to move GMail to a separate browser profile.

Creating a separate profile

The idea behind browser profiles is simple: each profile has separate history, settings, bookmarks, cookies, etc.

To create a new one, simply start Firefox with this option:

firefox -ProfileManager

to display a dialog which allows you to create new profiles.

Once you've created a new "GMail" profile, you can start it up from the profile manager or directly from the command-line:

firefox -no-remote -P GMail

(The -no-remote option ensures that a new browser process is created for it.)

To make this easier, I put the command above in a tiny gmail shell script that lives in my ~/bin/ directory. I can use it to start my "GMail browser" by simply typing gmail.
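The post doesn't show the script itself; a minimal version (assuming firefox is on the PATH and the profile is named "GMail", as above) might look like this:

```shell
#!/bin/sh
# ~/bin/gmail: start a dedicated Firefox instance
# using the separate "GMail" profile.
exec firefox -no-remote -P GMail "$@"
```

Remember to make it executable with chmod +x ~/bin/gmail.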

Tuning privacy settings for 2-step authentication

While I initially kept that browser profile in private browsing mode, this was forcing me to enter my 2-factor authentication credentials every time I started the browser. So to avoid having to use Google Authenticator (or its Firefox OS cousin) every day, I ended up switching to custom privacy settings and enabling all cookies.

It turns out, however, that there is a Firefox extension which can selectively delete unwanted cookies while keeping useful ones.

Once that add-on is installed and the browser restarted, simply add to the whitelist and set it to clear cookies when the browser is closed.

Then log into GMail and tick the "Trust this computer" checkbox at the 2-factor prompt.

With these settings, your browsing history will be cleared and you will be logged out of GMail every time you close your browser but will still be able to skip the 2-factor step on that device.

[Source: Planet Mozilla]

Byron Jones: happy bmo push day!

the following changes have been pushed to

  • [820226] product/component line in email notifications a bit confusing
  • [822547] should clear the request cache before sending each mail

Filed under: bmo, mozilla [Source: Planet Mozilla]

Jess Klein: Evolving Webmaker in 2013

"Allow events to change you. You have to be willing to grow. Growth is different from something that happens to you. You produce it. You live it. The prerequisites for growth: the openness to experience events and the willingness to be changed by them." - Bruce Mau from Incomplete Manifesto for Growth

2012 was a full year for us at the Mozilla Foundation - we released Thimble, a universal navigation MVP, soft launched badges, refined and launched Popcorn 1.0 and became an active voice for Web Literacy values in the community. So where does that leave us for 2013? As Bruce Mau's beautiful sentiment here reflects, now is the time for us to live in the moment and to be strong enough to allow ourselves to be changed or affected - by users, by people who do unexpectedly amazing things with our tools, and by people who are standing next to us in the crowd and trying to get their own voice heard.

2013 is full of a lot of opportunity. For Webmaker that means evolving our design to be a real-world learning experience. By that, I mean that we need to focus on collaborative webmaking. At our hack jams, community calls and festivals we have seen the value and success of peer-to-peer learning, and we need to leverage the experiences of those events and have them inform our design.

We laid the groundwork for this in 2012 by introducing badges into our ecosystem. Badges are naturally equated with skills; however, they need to have a level of social value in an ecosystem to have any meaning. That's the problem: badges for the sake of badges leave you feeling pretty flat, so you need the entire ecosystem to live and breathe as a social, collaborative environment where you have supportive peers who are invested in the skills, but also, importantly, in what it takes to earn those skills. That's our mission: create an environment where making = learning. Concretely, that means that our website is going to shift from a site to a platform, and our suite of tools is going to evolve into a more unified experience.

Over the past few weeks, a bunch of us on the Webmaker team took some time to do the blue sky thinking that needs to happen to take a design concept to the next level. We came up with some ideas and prototypes and I am going to share a few here.  Each prototype represents a concept that we want to explore in the New Year.

1. Your Creative Cloud - We want to make it easy to collect, share and remix content from your world, and this means everywhere that you are going, whether that be collecting content from your mobile device or browser and adding it directly to a project to remix on Webmaker. Maybe in the future that means remixing Webmaker content that you have clipped or saved from your cloud onto some sort of media clipping gallery, remixing that on the web in a more distributed manner.
We feel that this is one area where Mozilla can lead, as we are uniquely suited to be effective here because of the precedent of in-browser bookmarking in Firefox as well as our work on identity in conjunction with Persona. To be clear, we don't need or want to lock you into a uniquely browser-based experience, but the opportunities for this association and for leveraging the Mozilla brand and design values are quite large. We know this is a win for us, because our colleagues over at Firefox are in line with our thinking and we are starting to discuss opportunities to collaborate on this piece of the puzzle.


2. Collaboration as learning is a big theme for us, not just in the mockup I am showing, but as a consistent and fluid theme within our tools and platform. In the mockup above you are looking at a system for using revision history as a way to document process. Here, just like in Etherpad, you are able to record your webmaking sessions and replay them at different points in your timeline. You can see the work that you have done and how you constructed a project, as well as how a peer who is helping to hack your project made the changes that she did. This allows for asymmetrical as well as symmetrical collaboration experiences.

3. Making our tools speak the same language. We are trying to make the experience easier for the average user. This means putting some controls in place so that the user starts to recognize conventions and sees how the Webmaker properties relate to each other. As you can see in this mockup, I am starting to experiment with building a more unified experience, which includes: having a unified (and single) login for Webmaker; building out common terminology, controls and user interface; and incorporating the tools into the same experience. To be clear, this does not mean that Thimble goes away, it just means that we figure out a way for our tools to speak more clearly to the end user.

4. Creating opportunities for a Community of Craft - working with peers and mentors to build the web in a social, real way. I demoed this prototype at the Webmaker community call and Bobby Richter helped me present it:

Imagine that you logged into and landed on the above gallery page. The view that you are looking at is as if you have followed several other Webmaker users who have posted projects that they made, as well as "themes" that you might have followed, for example, video projects or projects about activism. Any of these projects are remixable. You are also seeing badge graphics which, when you click on them, open up to a sub-gallery of sets of projects connected to various skills. For the moment, imagine that you are Secretrobotron going to this page: you see a cool project by your friend and you click the green remix button, which opens it directly in the editor.

These are examples of some of the things that we have been thinking about. And this is exactly how we want to be working: putting out crazy ideas, testing them out, seeing what sticks, and iterating. Shout outs to Chris, Kate, Bobby, Atul, Brett, Chris, Erin and everyone who has been on Webmaker community calls in 2012. As I said earlier, this is an evolution of our thinking, so all these ideas came out of lots of ground work, user testing, hive pop-ups and festivals.

That last prototype was built in a period of 48 hours by Bobby, myself and Chris Appleton- so if all it takes is 48 hours to get some collaborative Webmaking action - imagine what we can make happen in 2013!

Reference: Check out this great post by Bobby on the prototype implementation.

[Source: Planet Mozilla]

Chris Pearce: Experimental H.264, AAC, and MP3 support in Firefox Nightly builds on Windows 7 and later

As the Internet has already discovered, recently I landed patches to add a Windows Media Foundation playback backend for Firefox. This is preff'd off by default.

This allows playback of H.264 video and AAC audio in MP4 and M4A files, and MP3 audio files in HTML5 <audio> and <video> elements in Firefox on Windows 7 and later.

To test MP4/MP3 playback, download the latest Firefox Nightly build, and toggle the pref "" to "true" in about:config.

There are a few bugs I'm aware of, which is why this landed preff'd off by default, but if you spot any bugs, please file a bug in Firefox's "Core :: Video/Audio" component.
[Source: Planet Mozilla]

Michael Verdi: User Education

When I started working on our support documentation back in 2010, our users found it helpful about 50% of the time. So we went to work on creating a better manual. That involved a lot of things, including changing the way we wrote and the way we organized things. Today users say our articles are helpful 75% of the time.

That’s a pretty great improvement (we think we can do even better) but one thing I noticed was that there was another important factor at work – where and when someone is pointed to an article. By far, the biggest spikes in our helpfulness rating come when someone points a reader to one of our articles. When you already have a person engaged in a topic and then say, “you should look at this because it will help” they not only do, they often find those articles helpful 90% or more of the time. These are classic teachable moments and I think it’s incredibly important to make use of them whenever possible.

Here are two examples of things that people hadn't gone out of their way to learn about, but when they were pointed to the articles in another context, they responded enthusiastically. Back in February, The Den blog pointed to an article about choosing passwords. Now people don't really ever look for this article on our support site. But when they read about it in this one blog post, 88,000 people clicked through and rated it helpful 95% of the time.

And more recently, Facebook started linking Firefox 3.6 users to this article in an effort to get them to upgrade. Over the last two months more than 1.1 million people have visited and rated that article helpful 95% of the time. We’ve also seen this kind of response when linking to articles about new features on the page that Firefox shows you after updating.

It's important to have a great user manual. Kathy Sierra made the point over and over again that the way to create passionate users is to teach them how to kick ass. And, especially for something like a web browser that people expect to open up and have it just work, it's critical to incorporate that teaching (in the browser or externally) in the right context – at the moment someone wants or needs it. That's a much better experience than stopping what you are doing and trying to sift through an entire internet full of information. Who has time for that?

This is something that I’m extremely excited to be working on over the next year as part of the Support Team’s goal of creating an amazing support experience for all of our products.

[Source: Planet Mozilla]

Kim Moir: Releng 2013 Call for Papers

A solid build, test, packaging and deployment story is crucial to the success of a software project. As John O'Duinn says, "release engineers have a multiplier effect". In other words, a release engineer can implement automation improvements that make every other developer more productive. Traditionally, release engineering hasn't been a common area of academic research. However, this is starting to change. Another challenge is that there typically isn't a lot of communication between academic researchers and release engineers. The aim of the Releng 2013 workshop on May 20, 2013 in San Francisco is to bring those two communities together: people practicing release engineering and the academic researchers studying it. It will be co-located with ICSE 2013, which is the largest academic software engineering conference.

Image ©thomashawk, licensed under Creative Commons by-nc-sa 2.0

Are you a release engineer who'd like to discuss the challenges you face and share your experiences with others? Or are you an academic looking to expand the audience for your research and discover new problems to analyze? If so, we encourage you to submit a paper or a talk and attend the workshop. Or if you just want to hear some great war stories, in both open source and commercial environments, this would be a fantastic place to learn. More details are on the web site. You can also follow us on twitter or Facebook. We look forward to seeing you in San Francisco!

Greg Wilson's article on the two solitudes (industry and academia) is an interesting read and underscores the importance of more interaction between these two communities.
[Source: Planet Mozilla]

David Walsh: Introduction to dcl

I'm incredibly honored to have Eugene Lazutkin author this post for David Walsh Blog. Eugene has written much of the vector graphics code for the Dojo Toolkit's dojox/gfx (and subsequent charting and drawing resources) library, a library I consider to be mind-blowingly awesome. Eugene chose to write about dcl, an ultra-flexible, tiny OOP JS library.

dcl is a minimalistic yet complete JavaScript package for node.js and modern browsers. It implements OOP with mixins + AOP at both “class” and object level, and works in strict and non-strict modes.

The simplest way to learn something is to dive right in. Let's implement a simple widget based on reactive templating: when we change the parameters of a widget, they are immediately reflected in a web page.

Assuming that we run our code using the AMD format in a browser, our "code shell" will look like this:

require(
  ["dcl", "dcl/bases/Mixer", "dcl/mixins/Cleanup", "dcl/advices/memoize"],
  function(dcl, Mixer, Cleanup, memoize){
    // our code goes here
  }

As the first step let’s code our data model:

var Data = dcl(Mixer, {
  declaredClass: "Data",
  updateData: function(data){
    dcl.mix(this, data);

We derived our class using single inheritance from Mixer, which comes with dcl. Mixer is a very simple base. All it does is copy the properties of the first constructor argument onto the instance.

Obviously in this simple example we could just call updateData() from our constructor, but let's assume that a constructor and an updater can do (slightly) different things and we want to keep them separate.

declaredClass is completely optional, yet recommended to be specified (any unique human-readable name is fine), because it is used by debugging helpers included with `dcl`.

Now let's code our nano-sized template engine, which substitutes strings like ${abc} with properties taken directly from an instance (this.abc in this case). Something like this:

var Template = dcl(null, {
  declaredClass: "Template",
  render: function(templateName){
    var self = this;
    return this[templateName].replace(/\$\{([^\}]+)\}/g, function(_, prop){
      return self[prop];

We specify the template to use by name, which is a property name on the object instance, and render() fills out that template string using properties specified on the object.

This is another demonstration of single inheritance: our Template is based on a plain vanilla Object, like any JavaScript object, which is indicated by using null as a base.
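The substitution itself can be tried outside of dcl; this standalone sketch mirrors the regular expression used in render():

```javascript
// Standalone sketch of the substitution performed by render():
// each ${prop} placeholder is replaced with the matching property.
function renderTemplate(template, data) {
  return template.replace(/\$\{([^\}]+)\}/g, function (_, prop) {
    return data[prop];
  });
}

console.log(renderTemplate(
  "Hello, ${firstName} ${lastName}!",
  { firstName: "Bob", lastName: "Smith" }
)); // Hello, Bob Smith!
```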

What else do we need? We need a way to manage our DOM node:

var Node = dcl([Mixer, Cleanup], {
  declaredClass: "Node",
  show: function(text){
    this.node.innerHTML = text;
  destroy: function(){
    this.node.innerHTML = "";

The code above provides a way to show some HTML, and clears out its presentation when we destroy() a widget.

It uses two bases: the already mentioned Mixer is used to get a property in during initialization (node in this case), and Cleanup, which again comes with dcl. The latter chains all destroy() methods together and provides a simple foundation for cleanup management, so all resources can be properly disposed of.

What we have done up to this point is come up with very small, manageable, orthogonal components, which reflect different sides of our widget and can be combined in different configurations. Let's put them all together now:

var NameWidget0 = dcl([Data, Template, Node], {
  declaredClass: "NameWidget0",
  template: "Hello, ${firstName} ${lastName}!"

var x = new NameWidget0({
  node:      document.getElementById("name"),
  firstName: "Bob",
  lastName:  "Smith"

"template")); // Hello, Bob Smith!
x.updateData({firstName: "Jill"});
"template")); // Hello, Jill Smith!

It works, but it is not very coherent, and way too verbose. Don’t worry, we will fix it soon.

Some readers probably noticed that we have three bases now: Data, Template, and Node, and two of them (Data and Node) are based on Mixer. How does it work? It works fine, because underneath dcl uses the C3 superclass linearization algorithm (the same one used by Python), which removes duplicates, and sorts bases to ensure that their requested order is correct. In this case a single copy of Mixer should go before both Data and Node. Read more on that topic in the dcl() documentation.

Now let’s address deficiencies of our implementation #0:

  • As soon as a widget is constructed, we should show text.
  • As soon as data is updated, we should show text.

Both requirements are simple and seem to call for good old-fashioned supercalls:

var NameWidget1 = dcl([Data, Template, Node], {
  declaredClass: "NameWidget1",
  template: "Hello, ${firstName} ${lastName}!",
  constructor: function(){
  updateData: dcl.superCall(function(sup){
    return function(){
      sup.apply(this, arguments);
  showData: function(){
    var text = this.render("template");;

var x = new NameWidget1({
  node:      document.getElementById("name"),
  firstName: "Bob",
  lastName:  "Smith"
}); // Hello, Bob Smith!

x.updateData({firstName: "Jill"}); // Hello, Jill Smith!

Much better!

Let's take a look at two new things: constructor and a supercall. Both are supposed to be supercalls, yet they look different. For example, constructor doesn't call its super method. Why? Because dcl chains constructors automatically.

updateData() is straightforward: it calls a super first, then a method to update the visual. But it is declared using a double function pattern. Why? For two reasons: run-time efficiency, and ease of debugging. Read all about it in the dcl.superCall() documentation, and Supercalls in JS.

While this implementation looks fine, it is far from “fine”. Let’s be smart and look forward: in real life our implementation will be modified and augmented by generations of developers. Some will try to build on top of it.

  • Our call to showData() in the constructor is not going to be the last code executed, as we expected. Constructors of derived classes will be called after it.
  • updateData() will be overwritten, and some programmers may forget to call a super. Again, they may update data in their code after our code called showData(), resulting in stale data being shown.

Obviously we can write lengthy comments documenting our "implementation decisions" and suggesting ways for future programmers to do it right, but who reads docs and comments, especially when writing "industrial" code in crunch time?

It would be nice to solve those problems in a clean elegant way. Is it even possible? Of course. That’s why we have AOP.

Let’s rewrite our attempt #1:

var NameWidget2 = dcl([Data, Template, Node], {
  declaredClass: "NameWidget2",
  template: "Hello, ${firstName} ${lastName}!",
  constructor: dcl.after(function(){
  updateData: dcl.after(function(){
  showData: function(){
    var text = this.render("template");;

var x = new NameWidget2({
  node:      document.getElementById("name"),
  firstName: "Bob",
  lastName:  "Smith"
}); // Hello, Bob Smith!

x.updateData({firstName: "Jill"}); // Hello, Jill Smith!

Not only did we get (slightly) smaller code, we are now guaranteed that showData() is called after all possible constructors, and after every invocation of updateData(), which can be completely replaced with code that may use supercalls. We don't really care: we just specified code which will be executed *after* whatever was put there by other programmers.

Now imagine that our user wants to click on a name, and get a pop-up with more detailed information, e.g., an HR record of that person. It would make sense to keep the information in one place, yet render it differently. And we already have a provision for that: we can add another template property, and call render() with its name:

var PersonWidget1 = dcl(NameWidget2, {
  declaredClass: "PersonWidget1",
  detailedTemplate: "..."

var x = new PersonWidget1({
  node:      document.getElementById("name"),
  firstName: "Bob",
  lastName:  "Smith",
  position:  "Programmer",
  hired:     new Date(2012, 0, 1) // 1/1/2012
}); // Hello, Bob Smith!

var detailed = x.render("detailedTemplate");

In the example above I skipped the definition of a detailed template for brevity. But you can see that we can add more information about a person, and we can define different templates when a need arises.

Imagine that we profiled our new implementation and it turned out that we call the render() method directly and indirectly very frequently, and it introduces some measurable delays. We could pre-render a template eagerly on every data update, yet that sounds like a lot of work for several complex templates, some of which may never even be used. A better solution is to implement some kind of lazy caching: we will invalidate the cache on every update, yet we will build a string only when requested.

Obviously such changes involve both Data and Template. Or it can be done downstream in NameWidget or PersonWidget. Now look above and please refrain from making those changes: so far we have tried to keep our "classes" orthogonal, and caching is clearly an orthogonal business.

dcl already provides a simple solution: memoize advice. Let’s use it in our example:

var PersonWidget2 = dcl(NameWidget2, {
  declaredClass: "PersonWidget2",
  detailedTemplate: "...",
  // memoization section:
  render:     dcl.advise(memoize.advice("render")),
  updateData: dcl.advise(memoize.guard("render"))

With these two lines added, our render() result is cached for every first parameter value ("template" or "detailedTemplate" in our case), and the cache will be invalidated every time we call updateData().

In this article we presented the dcl package. If you plan to use it in your Node.js project, install it like this:

npm install dcl

For your browser-based projects I suggest using volo.js:

volo install uhop/dcl

The code is open source on GitHub (uhop/dcl) under the New BSD and AFL v2 licenses.

This article didn’t cover a lot of other things provided by dcl:

  • Avoid the double function pattern in your legacy projects using inherited() supercalls.
  • Use AOP on object-level — add and remove advices dynamically in any order.
  • Specify “before” and “after” automatic chaining for any method.
  • Use debug helpers that come with dcl.
  • Leverage a small library of canned advices and mixins provided by dcl.

If you want to learn more about it, or are just curious, you can find a lot of information in the documentation.

Happy DRY coding!

Read the full article at: Introduction to dcl


[Source: Planet Mozilla]

Greg Wilson: Internet Humor from my Mum

[Source: Planet Mozilla]

Brandon Savage: "Do This, Not That" Now Available!

The long wait is over! Do This, Not That: Object Oriented PHP is now available! If you've ever had to rewrite code that didn't pass code review, this book is for you. If you've ever wondered how to improve your PHP development skills, this book is for you. This book is for everybody who ever [...] [Source: Planet Mozilla]

Lukas Blakk: Release Management gets an Intern!

Thanks to the GNOME Outreach Program for Women, we’ve got ourselves an awesome January intern who will be doing her first Open Source contributions all the way from Australia.

Lianne Lee stood out as the strongest of several applicants to the Release Metrics Dashboard project, which was one of the two Mozilla projects that Selena Deckelmann and I threw together in order to try and lure people into our devious schemes for moar metrics. Lianne's application was thorough and used all the technologies I wanted our intern to have familiarity with (python, git, javascript, creating data visualizations).

Firefox 17 triage over 6 weeks

She did a great job of showing one of the things release managers do over the six weeks a Firefox version is in Beta. The spikes in the above graph align with our constant triaging of tracking-firefox17? flags and how the number of bugs flagged for tracking decreases after the first few betas have shipped. When we get to beta 4 we’re starting to get more reserved about what we’re willing to track (it usually has to be pretty critical, or a low-risk fix to a many-user-facing issue).

Firefox 17 Tracked Bugs

This next graph shows us what we already know – but it’s very nice to SEE it: the bugs tracked for a particular release go down gradually over time. Remember, this happens while new bugs are regularly being added to tracking, so a steadily declining trend tells us we are staying on top of our work and that engineers keep fixing tracked bugs as we close in on a six-week ship date.

Now that we know Lianne has what it takes, we’re going to set her on a more ambitious project: an engineering dashboard, for both individuals and teams, that provides this sort of information on demand. Want to see where you (or your team) stand on a particular version? The engineering dashboard could show you, in priority order, what should be at the top of your list, as well as which tracked bugs your team has left unassigned and should assign pronto (or flag to RelMan as bugs that should not be tracked).

This will be a huge improvement over email nagging (don’t worry, that’s still going to be around for many more months) because it will give us quick, visual cues about how we’re doing with Firefox priorities, and we can keep these measurements over time to compare, release to release, what the temperature of a particular version was. We hope this will let us keep fine-tuning and working towards more stable beta cycles as we move forward.

Lianne will be with us from January 2 to April 2, 2013, and in her first week she’ll be evaluating a bunch of existing dashboards we know about to see the pros and cons of each and to do recon on the technologies and visualizations people use. We’ll use that to help us develop v1.0 of this project’s deliverable and make sure it’s left in a state that RelMan intern 2.0 can pick up next summer.

Please comment if you have dashboards you like, you loathe, or you just want us to know about.

[Source: Planet Mozilla]

Crystal Beasley: People DO Care About Little “p” Privacy

There’s a common perception that people don’t care about privacy. This is sort of right. Our research shows that most users are unaware of how extensively they’re being tracked by advertisers across the websites they visit. Even those who do know are unsure what to do about it. They know there’s a horrible legalese privacy policy hanging out somewhere. They know they should read it. They don’t. They know they should use different passwords on each site. They don’t.

Our research found a fatalistic attitude towards privacy and security. We heard everything from “No one wants my identity anyway. There’s no money in my bank account.*” to “If a hacker wanted to get in, I’m sure they could. I wouldn’t even know where to begin to defend myself.” *Note: that’s not how identity theft works.

The kind of privacy they care about is privacy from the people closest to them, those who could physically pick up their phone or computer. This is why people clear browser history and explicitly log out of sites. It’s not the hacker they’re worried about; it’s a visiting mother-in-law or kid sister getting a look at their email or vandalizing their Facebook wall. Unlike big “P” Privacy, the consequences of this threat are immediate and visceral. The potential costs to their reputation are perceived to be higher than anything a hacker could or would do.

Further, we found that the existing password manager is underserving the majority of our users. Many people don’t have the browser save their passwords because doing so leaves their accounts wide open to anyone with physical access to their device. It doesn’t defend against the primary threat they’re worried about, so they are left with no tools to help.

It’s little wonder we see such bad statistics on password reuse. We’ve told people not to reuse passwords, but it’s a cognitive impossibility to comply. It’s like saying you could avoid drowning by walking on water. Worse, everyone I’ve interviewed apologizes for not having a good enough memory. We’ve done our users no service by making them feel stupid or inadequate.

The Persona team has been busy prototyping better tools that address this set of user needs. Follow us on twitter at @mozillapersona to hear about these experiments and more.


Check out the research that underpins the Persona team:

Identity and the Internet: A study
Some attitudes on Facebook privacy
Privacy and social media: a small German study

Photo credit: Creative Commons license by Valeria Melissia Rosalez

People DO Care About Little “p” Privacy is a post from: Crystal Beasley, UX Designer at Mozilla

[Source: Planet Mozilla]

Tantek Çelik: Why you should say HTML classes, CSS class selectors, or CSS pseudo-classes, but not CSS classes

Search the web for "CSS classes" and you'll find numerous well intentioned references which are imprecise at best, and misleading or incorrect at worst. There are no such things as "CSS classes". Here's why you should refer to HTML classes, CSS class selectors, or even CSS pseudo-classes, but not "CSS classes".

Terminology Summary Table

Wondering what to call "classes" and don't care why? Here's a handy table of examples and terms.

the thing | what to call it
class="h-card vcard" | class attribute, HTML class attribute
h-card vcard | classes, class names, HTML classes / class names, or class attribute value
h-card | class, class name, or HTML class (name)
.h-card | class selector, or CSS class selector
:active | pseudo-class, pseudo-class selector, or CSS pseudo-class (selector) - except for :first-letter, :first-line, :before, :after
::first-letter | pseudo-element, pseudo-element selector, or CSS pseudo-element (selector)

Why "CSS classes" is imprecise or incorrect

There's no such thing as a CSS "class". Optimistically, people who say "CSS class" may be using it as an imprecise shorthand for "CSS class selector". However, in the articles and conversations I've seen and heard, that's not the case: they're incorrectly referring to the class itself, just the class name, e.g. "h-card" in the table above.

Both "h-card" and "vcard" are just "classes" or "class names" (as well as being root class names for the h-card / hCard microformats). If you need to be explicit that you're talking about web technologies, prefix the phrases with "HTML", e.g. "HTML class(es)" or "HTML class name(s)".

Why saying "CSS classes" is bad practice

This isn't just pedantry. By using the phrases "CSS class(es)" or "CSS class name(s)" you're not only being imprecise (or just plain wrong), you're tying the presentational context/framing of "CSS" to class names, which implies and even encourages the bad practice of using presentational class names.

Why saying "HTML classes" is good practice

In conversations, discussions, and especially when teaching workshops, it's better to be consistent about calling them "HTML class(es)" because that more accurately refers to their effects on structure and semantics. It's upon that structure and those semantics that we can then write whatever CSS rules we need to achieve the design of the day/week/month (which inevitably gets changed/tweaked far more often than the underlying markup/classes).

Hat-tip to Jonathan Neal, who asked about a "css naming convention guide" in a certain Freenode CSS-related IRC channel; I answered with text similar to the above and then decided I should blog it for reference. Oh, about class naming conventions: a big part of where microformats came from was the desire to come up with naming conventions for common components/chunks/modules in pages, like people, organizations, events, reviews, etc. Want to explore more such class naming conventions? Join us in IRC: Freenode: #microformats, or if you're in the San Francisco Bay Area, come by the microformats meetup & drinkup tonight (Facebook, Google+, Plancast).

[Source: Planet Mozilla]

Christian Heilmann: Conditional loading of resources with mediaqueries

Here is a quick idea about making mediaqueries not only apply styles when certain criteria are met, but also load the needed resources on demand. You can check a quick and dirty screencast with the idea or just read on.

Mediaqueries are very, very useful things. They allow us to react to the screen size, orientation, and even resolution of the device our apps and sites are shown on. That is in and of itself nothing new – in the past we just used JavaScript to read properties like window.innerWidth and reacted accordingly, but with mediaqueries we can do all of this in plain CSS and can add several conditions inside a single style sheet.
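As a small illustration of several conditions in a single style sheet, a sketch using the 12px/20px values from the example later in this post could look like this:

```css
/* Default: small type on narrow screens. */
p { font-size: 12px; }

/* Condition inside the same style sheet:
   larger type once the viewport is at least 600px wide. */
@media screen and (min-width: 600px) {
  p { font-size: 20px; }
}
```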

In addition to the @media selectors in a style sheet we can also add a media attribute to elements and make them dependent on the query. So for example if we want to apply a certain style sheet only when the screen size is larger than 600 pixels we can do this in HTML:

<link rel="stylesheet" 
      media="screen and (min-width: 601px)" 
      href="small.css">

Handy, isn’t it? And as we applied the mediaquery, you would expect that we only request this file when and if it is needed, which would even save an HTTP request and spare us the latency of loading a file over the wire (or over a 3G or EDGE connection). Especially with movies and source elements this could save a lot of time and traffic. Sadly, though, that is not the case.

Load all the things – even when they don’t apply

Let’s take this HTML document:

<html lang="en-US">
<head>
  <meta charset="UTF-8">
  <style type="text/css">
    body { font-family: Helvetica, Arial, sans-serif; }
    p    { font-size: 12px; }
  </style>
  <link rel="stylesheet"
        media="screen and (min-width: 600px)" 
        href="small.css">
  <link rel="stylesheet"
        media="screen and (min-width: 4000px)" 
        href="big.css">
  <title>CSS files with media queries</title>
</head>
<body>
  <p>Testing media attributes</p>
</body>
</html>

If your screen is less than 600 pixels wide, the paragraph should be 12px in size; over 600 pixels it is 20px (as defined in small.css); and on a screen more than 4000 pixels wide (not likely, right?) it should be 200px (as defined in big.css).

That works. So we really do not need to load big.css, right? Sadly enough, all the browsers I tested in load it anyway. This seems wasteful but is based on how browsers worked in the past and – I assume – is done to make rendering happen as early as possible. Try it out with your devtools of choice open.

Chrome loading both CSS files
Firefox loading both CSS files

I am quite sure that CSS preprocessors like SASS and LESS can help with that, but I was wondering how we could extend this idea: how can you not only apply styles to elements that match a certain query, but also load them only when and if they are applied? The answer – as always – is JavaScript.

Matchmedia to the rescue

Mediaqueries are not only applicable to CSS, they are also available in JavaScript. You can even have events firing when they are applied which gives you a much more granular control. If you want a good overview of the JavaScript equivalent of @media or the media attribute, this article introducing matchmedia is a good start.
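To get an intuition for what matchMedia reports, here is a toy sketch (my own illustration, not the real parser) that approximates what window.matchMedia(query).matches returns for simple min-width queries:

```javascript
// Toy matcher: handles only queries containing "(min-width: Npx)".
// The real matchMedia supports the full mediaquery grammar.
function matchesMinWidth(query, viewportWidth) {
  var m = /min-width:\s*(\d+)px/.exec(query);
  return m ? viewportWidth >= parseInt(m[1], 10) : false;
}

matchesMinWidth('screen and (min-width: 600px)', 800); // true
matchesMinWidth('screen and (min-width: 600px)', 320); // false
```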

Using matchmedia you can execute blocks of JavaScript only when a certain mediaquery condition is met. This means you could just write out the CSS when and if the query is true:

if (window.matchMedia('screen and (min-width: 600px)').matches) {
  document.write('<link rel="stylesheet" href="small.css">');
}

Of course, that would make you a terrible person, as document.write() is known to kill cute kittens from a distance of 20 feet. So let’s be more clever about this.

Instead of applying the CSS with a link element whose href causes the undesired loading, we dig into the HTML5 toolbox and use data attributes instead. Anything we want to be dependent on the query gets a data- prefix:

<link rel="stylesheet" class="mediaquerydependent" 
      data-media="screen and (min-width: 600px)" 
      data-href="small.css">
<link rel="stylesheet" class="mediaquerydependent" 
      data-media="screen and (min-width: 4000px)" 
      data-href="big.css">

We also add a class of mediaquerydependent to give us a hook for JavaScript to do its magic. As I wanted to go further with this and not only load CSS but anything that points to a resource, we can do the same for an image, for example:

<img class="mediaquerydependent"
     data-src="" 
     data-alt=""
     data-media="screen and (min-width: 600px)">

All that is missing then is a small JavaScript to loop through all the elements we want to change, evaluate their mediaqueries and change the data- prefixed attributes back to real ones. This is that script:

  var queries = document.querySelectorAll('.mediaquerydependent'),
      all = queries.length,
      cur = null,
      attr = null;
  while (all--) {
    cur = queries[all];
    if (cur.dataset.media &&
        window.matchMedia(cur.dataset.media).matches) {
      for (attr in cur.dataset) {
        if (attr !== 'media') {
          cur.setAttribute(attr, cur.dataset[attr]);
        }
      }
    }
  }
Here is what it does:

  1. We use querySelectorAll to get all the elements that need the mediaquery check and loop over them (using a reverse while loop).
  2. We test if the element has a data-media property and if the query defined in it is true.
  3. We then loop through all data- prefixed attributes and add a non-prefixed attribute with its value (omitting the media one).

In other words, if the condition of a minimum width of 600 pixels is met our image example will become:

<img class="mediaquerydependent"
     src="" alt=""
     data-src="" data-alt=""
     data-media="screen and (min-width: 600px)">

This will make the browser load the image and apply the alternative text.

But, what if JavaScript is not available?

When JavaScript is not available you have no problem either. As you are already in a fairyland, just ask a wandering magician on his unicorn to help you out.

Seriously though, you can of course provide presets that are available should the script fail. Just give the element the href of a fallback file, which will always be loaded and replaced only when needed:

<link rel="stylesheet" class="mediaquerydependent" 
      href="standard.css"
      data-media="screen and (min-width: 600px)" 
      data-href="green.css">

This will load standard.css in any case and replace it with green.css when the screen is more than 600 pixels wide.

Right now, this script only runs on first load of the page, but you could easily run it on window resize, too. As mentioned, matchmedia can even fire events when a query starts or stops matching, but according to the article introducing it those events are still broken in iOS, so I wanted to keep it safe. After all, mediaqueries are there to give users what they can consume on a certain device – the use case of resizing a window to see changes is more of a developer thing.
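If you do wire the script to window resize, a debounce helper keeps it from re-running on every single resize event. This is a generic sketch; applyMediaQueries is a hypothetical name for a function wrapping the attribute-swapping loop above:

```javascript
// Generic debounce: returns a wrapper that delays fn until calls
// have stopped arriving for `wait` milliseconds.
function debounce(fn, wait) {
  var timer = null;
  return function () {
    var self = this, args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function () { fn.apply(self, args); }, wait);
  };
}

// In a browser you would wire it up like this:
// window.addEventListener('resize', debounce(applyMediaQueries, 250));
```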

This could be used to conditionally load high resolution images, couldn’t it? You can grab the code on GitHub and see it in action here.

[Source: Planet Mozilla]

David Walsh: CSS calc

CSS is a complete conundrum; we all appreciate CSS for its simplicity, but we always yearn for the language to do just a bit more. CSS has evolved to accommodate placeholders, animations, and even click events. One problem we always thought we’d have with CSS, however, was its static nature; i.e. there’s really no logic, per se. The CSS calc routine bucks that trend, providing developers with an ounce of programming ability within CSS.


The calc routine is especially useful for calculating relative widths. Calculations can include addition, subtraction, multiplication, and division. Have a look:

/* basic calc */
.simpleBlock {
	width: calc(100% - 100px);
}

/* calc in calc */
.complexBlock {
	width: calc(100% - 50% / 3);
	padding: 5px calc(3% - 2px);
	margin-left: calc(10% + 10px);
}
WebKit and Opera currently require a browser prefix for calc(). Be sure to use whitespace around operators so that they aren’t parsed as positive and negative signs.
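The arithmetic itself is simple; as a plain-JavaScript sketch (not the CSS engine, just the math), here is what width: calc(100% - 100px) resolves to for a few container widths:

```javascript
// Resolve calc(100% - 100px) for a given container width in pixels:
// 100% of the container minus a fixed 100px.
function calcWidth(containerPx) {
  return containerPx - 100;
}

calcWidth(1000); // 900
calcWidth(480);  // 380
```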

CSS calc is another example of CSS taking over the role of JavaScript in layout, and I think it’s a great thing. As we move more toward responsive design, calc allows us to use percentages for overall block widths, with just a touch of pixel help for some of its components. Have you used calc for your apps? Share how you used calc!

Read the full article at: CSS calc


[Source: Planet Mozilla]

More News

Sponsored by: