Tuesday, April 30, 2013

Open Education Around the World – A 2013 Open Education Week Summary

Open Education Around the World – A 2013 Open Education Week Summary:
Creative Commons congratulates all who participated in the second annual Open Education Week, March 11-15, 2013. It’s impressive to see how global open education has become, with contributors from over 30 countries showcasing their work and more than 20,000 people from over 130 countries visiting the Open Education Week website during the week. Open Education Week featured over 60 webinars, open to anyone, plus numerous local events and workshops around the world.
We thought we’d highlight a few Creative Commons global affiliate events from Open Education Week and share a list of URLs for the Open Education Week webinar recordings that the Open Courseware Consortium has published.
The Creative Commons China Mainland team successfully held an Open Education Forum on the afternoon of March 16 at Renmin University of China in Beijing. One highlight of this salon worth special attention was the Toyhouse team from Tsinghua University, led by Prof. Benjamin Koo, and their recent project eXtreme Learning Process (XLP). The team is an inspiring example of innovative learning and a user of CC licenses and OER.
Tobias Schonwetter, Creative Commons regional coordinator for Africa, gave an Open Education for Africa presentation explaining why Creative Commons is so important for Open Educational Resources.
The School of Open launched with:

  • 17 courses, including 4 facilitated courses and 13 stand-alone courses (for participants to take at their own pace).

  • ~15 course organizers, affiliated with several organizations/initiatives, including: the National Copyright Office of Australia; University of Michigan’s Open.Michigan; Kennisland/CC Netherlands; Communicate OER, a Wikipedia initiative; Open Video Forum (xm:lab, Academy of Fine Arts Saar); Jamlab (a high school mentorship program in Kenya); and Wikimedia Germany and CC Germany.

These are just a sample of the rich global discourse that took place during Open Education Week. All webinars during Open Education Week were recorded, with links listed below. You can also view the videos directly on the Open Education Week YouTube channel and on the Open Education Week website, under Events and Webinars.
Webinar recordings
Monday, March 11
·       Building Research Profile and Culture with Open Access 

·       Learners orchestrating their own learning

·       Learning Innovations and Learning Quality: The future of open education and free digital resources

·       Näin käytät ja teet avoimia sisältöjä /How to use and create open content

·       New global open educational trends: policy, learning design and mobile

·       The multiple facets of Openness in #udsnf12  

·       Licencias Creative Commons para recursos educativos, ¿qué son? ¿cómo usarlas? / Creative Commons licenses for educational resources: what are they, and how do you use them?

·       Designing OER with Diversity In Mind

·       وسائل تعليمية تشاركية : تطوير الوسائل التعليمية تشاركيًا باستخدام أداة الابتكار ومنصة  سكراتش البرمجية / Collaborative educational tools: developing educational tools collaboratively using an invention kit and the Scratch programming platform

·       Driving Adoptions of OER Through Communities of Practice

·       Khan Academy: Personalized learning experiences

·       Good practices on open content licensing

Tuesday, March 12

·       OCW in the European Higher Education Context: How to make use of its full potential for virtual mobility

·       OLDS MOOC Grand finale (final convergence session)  

·       Äidinkielestä riippumaton suomen kielen opetus / Finnish language teaching independent of the mother tongue

·       Opening Up Education

·       CourseSites by Blackboard: A Free, Hosted, Scalable Platform for Open Education Initiatives

·       Xpert Search Engine and the Xpert Image Attribution Service

·       Capacitación para la educación abierta: OportUnidad en Latinoamérica / Training for open education: OportUnidad in Latin America

·       Language learning independent of mother language

·       Interactive Learning with Wolfram Technologies

·       Collaborative Boldly Confronts Licensing Issues

·       Buenas prácticas en el uso de licencias para contenidos abiertos / Good practices in using licenses for open content

Wednesday, March 13
·       是“誰”在使用你的開放式課程網站呢? / Who is using your OpenCourseWare site?

·       The interaction, co-construction and sharing of Netease Open Courses


·       Who is using your OCW site?

·       Políticas nacionales de Acceso Abierto en Argentina / National Open Access policies in Argentina

·       Open Policy Network: seeking community input

·       OER Commons Green: A Unique Lens on Open Environmental Education

·       Creative Commons 4.0 Licenses: What's New for Education?

·       How Community Colleges are Innovating with Open Educational Resources

·       P2PU: A Showcase of Open Peer Learning

Thursday, March 14
·       Open Access policy development at the University of Pretoria: the why, what and how?

·       What you can learn from the UKOER experience

·       Why Open Access is Right for the World Bank

·       What's behind Open Education? A philosophical insight

·       Utilizing OER to Create a Pathway Towards an Affordable Degree  

·       Toolkit Working Group: Tools to help users discover the content they need (1)

·       Learning toys for free: Collaborative educational tools development using MakeyMakey and SCRATCH platforms

·       Teach Syria: The Impact of Teaching Global to Today's Youth

·       Re-Creative Commons

·       Validating the Learning Obtained through Open Educational Resources

·       OER and Alternative Certification Models: An Analysis Framework

·       The Open Educational Resources in Brazil as an Instrument to Get Access to Qualification, The Government Role at OERs Creation & FGV and São Paulo State Case Studies

Friday, March 15
·       Open Education for Africa

·       National policies of Open Access to scientific outputs in Argentina

·       Re-thinking Developmental Education: Creating a STEM Bridge in the National STEM Consortium

·       Toolkit Working Group: Tools to aid and encourage use of OERs in teaching

·       Crowd-sourced Open Courseware Authoring with SlideWiki.org

·       Using OER to reduce student cost and increase student learning

·       What's next? An open discussion about open education

·       OpenStax College Textbooks: Remixable by Design

·       An OER Editor for the Rest of Us

Kudos to the OCW Consortium for organizing this event. We look forward to next year's.


DRM in HTML5 Is a Bad Idea

DRM in HTML5 Is a Bad Idea:

Creative Commons strongly believes in the respect of copyright and the wishes of content creators. That’s why CC has created a range of legal tools that rely in part on copyright to enable our vision of a shared commons of creative and intellectual works.
But when creators’ rights come at the expense of a usable internet, everyone suffers. Over the past 15 years, various companies have used mechanisms to limit the ways in which users can use their content. These digital rights management (DRM) techniques make the internet less usable for everyone. CC believes that no DRM system can account for the full complexity of the law: DRM creates black-and-white situations where legally there is wiggle room (such as fair use). This failing causes DRM to restrict consumer freedoms that the law would otherwise permit, and that can create very real harm to consumers. Examples abound; a recent one can be seen in this report on how, over those same 15 years, DRM and the DMCA have seriously limited the ability of the visually impaired to access e-books they can use.
The W3C recently published a draft proposal that would make DRM a part of HTML5. While CC applauds efforts to get more content distributed on the web, DRM does more harm than good. Beyond limiting consumer freedoms, it’s not at all clear this proposal would even be effective in curbing piracy. Given the proposal’s architecture, it will create a dependence on outside components that will not be part of the standardized web. A standardized web is essential because it allows anyone to participate without giving any one player a say over which proprietary device, software, or technology they must use. The proposal opens up exactly such a dependency: it allows web pages to require that specific proprietary software or hardware be installed. That’s a dangerous direction for the web, because it means that for many real-life uses it will be impossible to build end-to-end open systems to render web content.
Read EFF’s post on defending the open web from DRM for more details on the proposal, its history, and the threat it poses. Get the facts and, if you’re interested, sign the Free Software Foundation’s petition to oppose DRM in web standards.


Open Textbook Summit

Open Textbook Summit:
On April 8 & 9, 2013, BCcampus hosted, and Creative Commons facilitated, an Open Textbook Summit in Vancouver, British Columbia, Canada. The Open Textbook Summit brought together government representatives, student groups, and open textbook developers in an effort to coordinate and leverage open textbook initiatives.
Participants included:

BC Ministry of Advanced Education, Innovation and Technology (AEIT)

Creative Commons

eCampus Alberta

Alberta Enterprise & Advanced Education

The 20 Million Minds Foundation

Washington Open Course Library

University of Minnesota Open Textbook Catalogue

Lumen Learning

Open Courseware Consortium

Student Public Interest Research Groups

Right to Research Coalition

Canadian Alliance of Student Associations (CASA)
California and British Columbia recently announced initiatives to create open textbooks for high-enrollment courses. In her welcoming remarks on behalf of the Deputy Minister of Advanced Education, Innovation and Technology, Susan Brown noted that the Open Textbook Summit was “a unique opportunity to share information about the work underway in our respective jurisdictions and organizations to capitalize on lessons learned; to identify common areas of interest; and to discover potential opportunities for collaboration. The real power of a project like this is only realized by working together.”
On the summit’s first day, the BC government announced it was “Moving to the next chapter on free online textbooks,” releasing a list of the 40 most highly enrolled first- and second-year subject areas in the provincial post-secondary system.
Over the course of the summit, participants identified existing open textbooks that could be used for BC’s high-enrollment courses. Development plans for creating additional open textbooks were mapped out. Strategies for academic use of open textbooks were discussed, ranging from open textbooks for high-enrollment courses to zero-textbook degree programs, in which every course in a credential has an open textbook.
Open textbook developers described the tools they are using for authoring, editing, remixing, repository storage, access, and distribution. Participants discussed the potential for creating synergy between initiatives through use of common tools and processes.
Measures of success, including saving students money and improving learning outcomes, were shared, and the potential for a joint open textbook research agenda was explored. The summit concluded with suggestions from all participants on ways to collaborate going forward. David Porter's recommendation of an ongoing Open Textbook Federation was enthusiastically endorsed.
Mary Burgess created a Google group called The Open Textbook Federation for further conversations and collaborations. This group is open to anyone currently working on, or thinking of working on, an Open Textbook Project. Notes from the Open Textbook Summit are posted online. Clint Lalonde created a Storify of the Twitter conversation captured during the summit.
The Open Textbook Summit was an incredible day and a half of learning. The sharing of insights, experiences, hopes, and ideas left everyone energized with a commitment to join together in a cross-border federation that collaborates on open textbooks.


Digital Public Library of America Launches

Digital Public Library of America Launches:

Creative Commons would like to congratulate the Digital Public Library of America on its official launch today. The DPLA, which has been in planning since 2010, brings together millions of digital resources from numerous libraries, archives, and museums.
From DPLA:

The Digital Public Library of America will launch a beta of its discovery portal and open platform at noon ET today. The portal will deliver millions of materials found in American archives, libraries, museums, and cultural heritage institutions to students, teachers, scholars, and the public. Far more than a search engine, the portal will provide innovative ways to search and scan through its united collection of distributed resources. Special features will include a dynamic map, a timeline that allows users to visually browse by year or decade, and an app library that provides access to applications and tools created by external developers using DPLA’s open data.

In January, DPLA announced that all of its metadata would be in the public domain under the CC0 Public Domain Dedication. The Open Knowledge Foundation’s Joris Pekel applauded that announcement:

The decision to apply the CC0 Public Domain waiver to the metadata will greatly improve interoperability with Europeana, Europe’s equivalent of the DPLA. Now that more different initiatives start publishing digitised heritage and its metadata, interoperability becomes more and more important in order to create a linked web of cultural heritage data, instead of new data silos. By both choosing the CC0 Public Domain waiver, Europeana and the DPLA take a great step forward in achieving their goal.

We applaud DPLA’s commitment to open data and are excited about the launch of such an important resource.


Windows themes for the armchair traveler

Windows themes for the armchair traveler:
One of the things I love most about my job is that I get to see thousands of images submitted by artists and photographers from around the world through the Open Call project. From sweeping landscapes and cityscapes to macro images that show the tiny details of a dried leaf or the reflections in a drop of water, it’s an armchair traveler’s dream job, although we do get so many submissions that it can be a bit overwhelming, too. From the many thousands that are submitted to us, I try to bring you the very best images in themes and as wallpapers on the Personalization Gallery.
Let me bring you along on one of my armchair tours with a set of new Open Call themes that take us from New Zealand to India, with stopovers along the way in Canada, Germany, Sardinia, Macedonia, and the UK.
Our trip begins in New Zealand. I love Ian Rushton’s gorgeous HDR landscape photography, so I am excited to bring you his first theme on the Windows Personalization Gallery, which was just published today. With this theme we take a vicarious vacation to the beaches of New Zealand’s West Coast, complete with lapping waves and the cries of seagulls. By the way, stay tuned in upcoming weeks for additional themes from Ian Rushton showcasing more of New Zealand’s beauty! If you ever wondered why so many films are shot in NZ, you will wonder no more.
Next stop is Canada. With Winter Garden, photographer Hayley Elizabeth gives us a frosty follow-up to her previous two themes, Garden Life and Garden Life 2. This time, her camera is trained on delicate ice crystals and dried seed pods instead of lush flowers and buzzing bees, but there is still beauty to be found in a garden, even when the ground is frozen and nothing is blooming. Besides, it’s the one time of year when I don’t need to feel guilty about putting off weeding.
We detour now to coastal Germany with photographer Frank Hojenski, who captures both stormy drama and windswept serenity in images that include beachscapes and lakesides from around Mecklenburg-Vorpommern, as well as boats bound for Denmark on the Baltic Sea. I like the moody lighting of his photographs.
Our travels take us next to one of the most beautiful islands in the Mediterranean Sea; Sardinian Shores showcases the work of Italian photographer Giovanni Cultrera in stunning shots of Sardinia’s starkly beautiful granite beaches and gleaming waterfront cityscapes. The water in Cultrera’s photos reflects misty sunset hues in some images and is startlingly limpid and clear in others.
Our next outing is a nature walk with Macedonian photographer Slavco Stojanoski, who focuses on water in a smaller way. His gorgeous macro photographs reveal tiny, glassy worlds in raindrops and dew, strung like beads on a spider’s web, or jeweling the surface of a leaf.
London is our next stop, and the architecture of London has never looked more stunning than in the photographs of Imran Mirza. His innovative images show famous landmarks such as Tower Bridge, Canary Wharf, the Millennium Bridge, the London Eye, and Buckingham Palace in a dramatic new light.
The last stopover on our virtual flight is a visit to the parks and gardens of Nagpur with photographer Mayur Kotlikar. In a follow-up to his popular Bees theme, Kotlikar provides another collection of exquisite macro images; this time showcasing the colorful diversity of butterflies found in and around the winter capital of Maharashtra, India.
That concludes our armchair travels for this week. You may return your recliner to its upright position while we taxi gently to a close. Please fly with us again in the future, as we take you and your Windows desktop to more beautiful locations around the world. No tickets, no standing in line, and best of all, no airport security required. Thanks again for traveling with us; this is your captain, signing off.
Update: I can't believe that in this post I totally forgot to mention the one new theme that celebrates an armchair traveler's true best friend: BOOKS!
Stack of old books
When I was a kid I wanted to grow up to be a librarian, because I thought that meant I would be able to sit around all day reading books. The new Beauty of Books theme harks back to this childhood dream by showcasing some of the most illustrious libraries in the world, and the gorgeous old books to be found in them.
Baroque library hall with ceiling artwork by Jan Hiebl, Clementinum, Prague, Czech Republic
I even scheduled this theme to coincide with National Library Week (4/14-4/20)… and then I forgot to mention it. D'oh! Well, even though Library Week is over now, I still hope you'll enjoy this theme in honor of your local library and in appreciation of all the people who work to make books available to everyone.


Free Book Choice with Audible and Windows 8

Free Book Choice with Audible and Windows 8:
I love books. I love them so much that there was a time in my life when I helped the school librarian every day instead of going to recess. That lasted almost two years. Even now, given the choice between a book and a lot of other things, I’ll still choose the book. As I’ve grown older, I’ve also come to enjoy audiobooks. That way I can go for a run and “read” a book, or drive to Vancouver without having “Thrift Shop” stuck in my head for the next week – not that I have anything against the song.
Luckily, I found Audible. Audible has apps for Windows 8 and Windows Phone that let you download and listen to books on the go. With over 135,000 titles from classics to New York Times bestsellers, you can enjoy endless hours of entertainment.
And right now Audible is running an exclusive promotion for Windows Phone and Windows 8 users:
New Audible customers who register with Audible through the Windows 8 or Windows Phone app will receive a free book (no subscription or credit card required).
You can select your book from among the following great titles:
Offer good for a limited time only and while promotional supplies last.
From one Gatsby fan to another, this is a deal you don’t want to miss.


The Private Cloud Blog has combined its efforts with Building Clouds

The Private Cloud Blog has combined its efforts with Building Clouds:
Hey everyone,
I wanted to give you an update on things since we have been a little quiet on the Windows Server Blog over the last couple of months. Clearly, we are in the down section of the blog cycle. Once the BIG Stories about Windows Server 2012 have been told, the technology teams pick up the slack and start getting into the nitty gritty of designing, building, deploying and supporting the Modern Data Center.
Speaking of a technology team picking up the slack.... Almost three years ago we launched the Private Cloud Architecture Blog. At the time it was pretty heavy on theory and pretty light on details. Over the last year the blog really picked up momentum and started going into much more detail about what we mean by building a private cloud. Now that Windows Server 2012 and System Center 2012 SP1 are generally available, the balance has shifted decisively toward a heavy load of detailed content on building a Modern Data Center.
Earlier this month, we merged the Private Cloud Blog with a newly launched blog called Building Clouds. So now we have two blogging teams extending our charter from talking just about private clouds to talking about building clouds, whether that involves an all on-premises deployment, a hybrid IT deployment, or an all-hosted cloud deployment. We've brought together some of the brightest people across engineering and the field to bring you the latest thinking on how you can best leverage the Cloud OS and System Center to continue moving your data center toward a more robust cloud architecture.
I encourage you to put Building Clouds: Cloud & Datacenter Solutions on your short list of blogs to read. The team has been on fire since the merger, and the amount of valuable content has been quite amazing to take in.
Kevin Beares
Senior Community Lead - Windows Server and System Center


I'll be speaking at Percona Live April 24th in Ballroom F

I'll be speaking at Percona Live April 24th in Ballroom F:


Sharding, splitting the data from a single database server across many database servers, is a method of scaling horizontally, and it is needed to get more disk IOPS out of a mechanical hard drive server architecture. It is a method that works, yet it has pitfalls, which this session covers. The focus is on what happens when solid state drives (SSDs) replace traditional mechanical hard drives (spinning metal) in a sharded environment, and the session answers questions like:

How many more IOPS do you get with SSDs?

Which RAID levels and controllers work with SSD drives?

How do you migrate data from shards to increase density on SSD Shards?

Why running multiple MySQL instances per SSD server is great.

How InnoDB compression really helps in an SSD environment.
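To make the shard-routing and density questions above concrete, here is a minimal sketch of hash-based shard routing and a re-mapping plan for consolidating shards onto denser SSD servers. Everything in it (the shard names, the MD5 key hash, the four-to-two consolidation) is a hypothetical illustration, not the method from the talk.

```python
# Hash-based shard routing: rows for a key live on the server the key hashes to.
import hashlib

SHARDS = ["db-ssd-01", "db-ssd-02", "db-ssd-03", "db-ssd-04"]

def _bucket(key, shards):
    # A stable hash (not Python's built-in hash()) keeps routing
    # consistent across processes and restarts.
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return shards[int(digest, 16) % len(shards)]

def shard_for(user_id):
    """Return the server holding this user's rows."""
    return _bucket(user_id, SHARDS)

def migrate_plan(old_shards, new_shards, keys):
    """Increasing density on SSD shards means re-mapping keys onto a
    smaller shard list; return (key, old_server, new_server) moves."""
    moves = []
    for k in keys:
        old, new = _bucket(k, old_shards), _bucket(k, new_shards)
        if old != new:
            moves.append((k, old, new))
    return moves
```

With plain modulo hashing like this, shrinking the shard list reshuffles many keys; a consistent-hashing scheme would reduce the number of rows that have to move.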

This is my fifth or sixth presentation since the conferences began. I am looking forward to spending time with old friends and people who love to talk about scale, data, and technical stuff, like you! Come see me when you get a chance. Note: I will post my slides here.


Two Governments, Both Alike In Dignity

Two Governments, Both Alike In Dignity:
Disclaimer: I’m engaged to Frances Berriman, the front-end lead at the UK’s Government Digital Service. She did not approve this post. It is, however, the product of many of our discussions. You’ll understand shortly why this is relevant.
It seems entirely rational to be skeptical about governments doing things well. My personal life as a migrant to the UK is mediated by heaving bureaucracies, lawyer-greased wheels, and stupefyingly arbitrary rules. Which is to say nothing of the plights of my friends and co-workers — Google engineers, mostly — whose migration stories to the US are horrifying in their idiocy. Both the US and UK seem beholden to big-stupid: instead of trying to attract and keep the best engineers, both countries seem hell-bent on keeping them out. Heaven forbid they make things of value here (and pay taxes, contribute to society, etc.)! It takes no imagination whatsoever for me to conjure the banality and cruelty that are the predictable outcomes of inflexible, anachronistic, badly-formulated policy.
You see it perhaps most clearly when this banality is translated to purely transactional mediums. PDFs that you must fax — only to have a human re-type the results on the other end. Crazy use of phones (of course, only during business hours). Physical mail — the slowest and worst thing for a migrant like myself — might be the most humane form of the existing bureaucracy in the US. Your expectations are set at “paper”, and physical checks and “processing periods” measured in weeks feel somehow of a piece. It has always “worked” thus.
It’s befuddling then to have been a near witness to nothing short of the wholesale re-building of government services here in the UK to be easiest to navigate by these here newfangled computrons. And also just flat-out easy. The mantra is “digital by default”, and they seem to be actually pulling it off. Let me count the ways:
  1. High-level policy support for the effort
  2. Working in the open. Does your government do its development on github?
  3. Designing services for the humans that use them, not the ones who run them
  4. Making many processes that were either only-physical or damn infuriating near-trivial to do online
  5. Making key information understandable. Give the UK “limited corporation” page a view. Now compare to the California version. Day, meet night.
  6. Saving both government and citizens massive amounts of money in the process
They even have a progress bar for how many of the ministries have been transformed in this way.
Over the same timeframe I’ve known a few of the good folks who have put themselves in the position of trying to effect changes like this at Code for America. It’s anathema in the valley to say anything less than effusive about CFA — anything but how they’re doing such good, important work. How CFA has the potential to truly transform the way citizens and government interact. Etc, etc. And it’s all true. But while CFA has helped many in the US understand the idea that things could be better, the UK’s Government Digital Service has gone out and done it.
So what separates them?
First, the sizes of the challenges need to be compared. The US has 5x the population, an economy that’s 6x larger, and a federalist structure that makes fixing many problems more daunting than most UK citizens can possibly imagine. Next, it should be noted that London is a better place to try to hire the Right People (TM). Yes, it’s much more expensive to live here, but software salaries are also much lower (both in relative and absolute terms). There wasn’t as much tech going on here as in the valley to start with, and the gold-rush to produce shiny but less competent versions of existing websites for world+dog (aka: “the app ruse”) hasn’t created the engineering hiring frenzy here that it has stateside. There’s also a general distrust in the American psyche about the core proposition of the government doing good things. Public-spiritedness seems to so many of my generation a sort of dusty nostalgia that went the way of hemp and tie-dye. Close encounters with modern American government do little to repair the image.
But all of those seem surmountable. The US has more of everything, including the Right People (TM). Indeed, the UK is managing an entire first-world’s set of services on a smaller GDP. Why then do US public services, to be blunt, largely still suck?
The largest differences I’ve observed are about model. Instead of having a mandate to change things from the inside, the organizational clout to do it, and enough budget to make a big dent out of the gates (e.g., gov.uk), CFA is in the painful position of needing to be invited, while at the same time trying to convince talented and civic-minded engineers and designers to work for well below industry pay, for a limited time, on projects that don’t exist yet.
Put yourself in the shoes of a CFA Fellow: you and your compatriots are meant to help change something important in the lives of citizens of a municipality that has “invited” you but which is under no real pressure to change, has likely moved no laws or contracts out of the way to prepare for your arrival, and knows you’re short-timers. Short-timers that someone else is taking all the risk on and paying for. What lasting change will you try to effect when you know that you’ve got a year (tops) and that whatever you deliver must be politically palatable to entrenched interests? And what about next year’s Fellows? What will they be handed to build on? What lasting bit of high-leverage infrastructure and service design will they be contributing to?
The contrast between that and the uncomfortably-named “civil servants” of the GDS could not be more stark. I don’t get the sense that any of them think their job is a lifetime assignment — most assume they’ll be back at agencies any day now, and some of the early crop have already moved on in the way nerds tend to do — but at the pub they talk in terms of building for a generation, doing work that will last, and changing the entire ethos of the way services are produced and consumed. Doing more human work. And then they wake up the next morning and have the authority and responsibility to go do it.
I don’t want to be down on CFA. Indeed, it feels very much like the outside-government precursor to the GDS: mySociety. mySociety was put together by many of the same public-spirited folks who initially built the Alpha of what would a year later become gov.uk and the GDS. Like CFA, mySociety spent years pleading from the outside, making wins where it could — and in the process refining ideas of what needed to change and how. But it was only once the model changed and they grabbed real leverage that they were able to make lasting change for the better. I fear CFA and the good, smart, hard-working people who are pouring their lives into it aren’t missing anything but leverage — and won’t make the sort of lasting change they want without it. CFA as an organization doesn’t seem to understand that’s the missing ingredient. America desperately needs its public services to make the same sort of quantum leap that the UK’s are making now. It is such an important project, in fact, that it cannot be left to soft-gloved, rose-tinted idealism. People’s lives are being made worse by misplaced public spending, badly executed projects, and government services that seem to treat service as an afterthought. CFA could be changing this, and we owe it to ourselves and our friends there to ask clearly why that change hasn’t been forthcoming yet.
The CFA Fellows model has no large wins under its belt, no leverage, and no outward signs of introspection regarding its ability to deliver versus the GDS model. Let’s hope something larger is afoot beneath that placid surface.
Update: I failed to mention in the first version of this post that the one of the largest philosophical differences between the two countries is the respective comfort levels with technocratic competence. There exists a strain of fatalism about government in the US that suggests that because government doesn’t often do things well, it shouldn’t try. It’s a distillation of the stunted worldviews of the libertarian and liberal-tarian elite and it pervades the valley. Of course governments that nobody expects anything of will deliver crappy service; how could it be otherwise?
What one witnesses here in the UK is the belief that, regardless of what some theory says, it’s a problem when government does its job badly. To a lesser degree than I sensed in years past, but still markedly more so than in the US, the debate here isn’t about whether the government can get good at something, but about why it isn’t better at the things the people have given it responsibility for.
As a result, the question quickly turns to how one can expect a government to manage procurement of technical, intricate products for which it’s the only buyer (or supplier) without the competence to evaluate those products — let alone manage operations of them. Outsourcing’s proponents had their go here, and enormous, open-ended, multi-year contracts yielded boondoggle after boondoggle. By putting contractors in a position of power over complexity, and starving the in-house experts of staffing and resources to match, the government forfeited its ability to change its own services to meet the needs of citizens. What changed with gov.uk was that the government decided it had to get good at the nuts and bolts of delivering services, outsourcing bits and pieces of small work but owning the whole and managing it in-house. Having the Right People (TM) working on your team matters. If they’re at a contractor, they have a different responsibility and fiduciary duty. When ownership of the product is mostly in-house, ambiguities borne of incomplete contract theory are settled in favor of the citizen’s (or, worst case, the government’s) interest, not the profit motive.
The gov.uk folks say “government should only do what only government can do”, but my observation has been that that’s not the end of the discussion: doing it well and doing it badly are still differentiable quantities. And doing better by citizens is good. Clearing space to do good is the essential challenge.


Why What You’re Reading About Blink Is Probably Wrong

Why What You’re Reading About Blink Is Probably Wrong:
By now you’ve seen the news about Blink on HN or Techmeme or wherever. At this moment, every pundit and sage is attempting to write their angle into the announcement and tell you “what it means”. The worst of these will try to link-bait some “hot” business or tech phrase into the title. True hacks will weave a Google X and Glass reference into it, or pawn off some “GOOGLE WEB OF DART AND NACL AND EVIL” paranoia as prescience (sans evidence, of course). The more clueful of the ink-stained clan will constrain themselves to objective reality and instead pen screeds for/against diversity, despite it being a well-studied topic to which they’re not adding much.
May the deities we’ve invented forgive us for the tripe we’re about to sell each other as “news”.
What’s bound to be missing in most of this coverage is what’s plainly said, if not in so many words, in the official blog post: going faster matters.
Not (just) code execution, but cycle times: how long does it take you to build a thing you can try out, poke at, improve, or demolish? We mere humans do better when we have directness of action. This is what Bret Victor points us towards — the inevitable constraints of our ape-derived brains. Directness of action matters, and when you’re swimming through build files for dozens of platforms you don’t work on, that’s a step away from directness. When you’re working to fix or prevent regressions you can’t test against, that’s a step away. When compiles and checkouts take too long, that’s a step away. When landing a patch in both WebKit and Chromium stretches into a multi-day dance of flags, stub implementations, and dep-rolls, that’s many steps away. And each step hurts by a more-than-constant factor.
This hit home for me when I got my first workstation refresh. I’d been working on Chrome on Windows for nearly a year in preparation for the Chrome Frame release, and all the while I’d been hesitant to ask for one of the shiny new boxes that the systems people were peddling like good-for-you-crack — who the hell was I to ask for new hardware? They just gave me this shiny many-core thing a year ago, after all. And I had a linux box besides. And a 30″ monitor. What sort of unthankful bastard asks for more? Besides, as the junior member of the team, surely somebody else should get the allocation first.
Months later they gave me one anyway. Not ungrateful, I viewed the new system with trepidation: it’d take a while to set up, and I was in the middle of a marathon weekend debugging session over a crazy-tastic re-entrancy bug in a GCF interaction with urlmon.dll that was blocking the GCF launch. If there was a wrong time to change horses, surely this was it. At some point it dawned on me that 5-10 minute link times provided enough time to start staging/configuring the shiny i7 box.
A couple of hours later the old box was still force-heating the eerily dark, silent, 80-degree floor of the SF office — it wasn’t until a couple of weeks later that I mastered the after-hours A/C — when my new, even hotter workstation had an OS, a checkout, compiler, and WinDBG + cargo-culted symserver config. One build on the new box and I was hooked.
5-10 minute links went to 1-2…and less in many cases because I could now enable incremental linking! And HT really worked on the i7s, cutting build times further. Hot damn! In what felt like no time at all, my drudgery turned to sleuthing/debugging bliss (if there is such a thing). I could make code changes, compile them, and be working with the results in less time than it took to make coffee. Being able to make changes and then feel them near-instantly turned the tide, keeping me in the loop longer, letting me explore faster, and making me less afraid to change things for fear of the time it would take to roll back to a previous state. It wasn’t the webdev nirvana of ctrl-r, but it was so liberating that it nearly felt that way. What had been a week-long investigation was wrapped up in a day. The launch was un-blocked (at least by that bug) and the world seemed new.
The difference was directness.
The same story repeats itself over and over again throughout the history of Chrome: shared-library builds, ever-faster workstations, trybots and then faster trybots, gyp (instead of Scons), many different forms of distributed builds, make builds for gyp (courtesy of Evan Martin), clang, and of course ninja (also Evan…dude’s a frickin hero). Did I mention faster workstations? They’ve made all the same sort of liberating difference. Truly and honestly, in ways I cannot describe to someone who has not felt the difference between ctrl-r development and the traditional Visual Studio build of a massive project, these are the things that change your life for the better when you’re lashed to the mast of a massive C++ behemoth.
If there is wisdom in the Chrome team, it is that these projects are not only recognized as important, but the very best engineers volunteer to take them on. They seem thankless, but Chrome is an environment that rewards this sort of group-adaptive behavior: the highest good you can do as an engineer is to make your fellow engineers more productive.
And that’s what you’re missing from everything else you’re reading about this announcement today. To make a better platform faster, you must be able to iterate faster. Steps away from that are steps away from a better platform. Today’s WebKit defeats that imperative in ways large and small. It’s not anybody’s fault, but it does need to change. And changing it will allow us to iterate faster, working through the annealing process that takes a good idea from drawing board to API to refined feature. We’ve always enjoyed this freedom in the Chromey bits of Chrome, and unleashing Chrome’s Web Platform team will deliver the same sorts of benefits to the web platform that faster iteration and cycle times have enabled at the application level in Chrome.
Why couldn’t those cycle-time-improving changes happen inside WebKit? After all, much work has happened in the past 4 years (often by Googlers) to improve the directness of WebKit work: EWS bots, better code review flow, improved scripts and tools for managing checkins, the commit queue itself. The results have been impressive and have enabled huge growth and adoption by porters. WebKit now supports multiple multi-process architecture designs, something like a half-dozen network stack plug-ins, and similar diversity at every point where the engine calls back to outside systems for low-level implementation (GPU, network, storage, databases, fonts…you name it). The community is now committed to enabling porters, but due to WebKit’s low-ish level of abstraction, each new port raises the tax paid by every other port. As James Robinson has observed, this diversity creates an ongoing drag when the dependencies are intertwined with core APIs in such a way that they can bite you every time you go to make a change. The Content API boundary is Blink’s higher-level “embedding” layer and encapsulates all of those concerns, enabling much cleaner lines of sight through the codebase and the removal of abstractions that seek only to triangulate between opaque constraints of other ports. Blink gives developers much more assurance that when they change something, it’s only affecting the things they think it’s affecting. Moving without fear is the secret of all good programming. Putting your team in a position to move with more surety and less fear is hugely enabling.
Yes, there are losses. Separating ourselves from a community of hugely talented people who have worked with us for years to build a web engine is not easy. The decision was wrenching. We’ll miss their insight, intelligence, and experience. In all honesty, we may have paid too high a price for too long because of this desire to stay close to WebKit. But whatever the “right” timing may have been, the good that will come from this outweighs the ill in my mind.
Others will cover better than I can how this won’t affect your day-to-day experience of WebKit-derived browser testing, or how it won’t change the feature-set of Chrome over-night, or how the new feature governance process is more open and transparent. But the most important thing is that we’ll all be going faster, either directly via Blink-embedding browsers or via benchmarks and standards conformance shaming. You won’t feel it overnight, but it’s the sort of change in model that enables concrete changes in architecture and performance and that is something to cheer about — change is the predicate for positive change, after all.


Asm.js: The JavaScript Compile Target

Asm.js: The JavaScript Compile Target:
Like many developers I’ve been excited by the promise of Asm.js. Reading the recent news that Asm.js is now in Firefox nightly is what got my interest going. There’s also been a massive surge in interest after Mozilla and Epic announced (mirror) that they had ported Unreal Engine 3 to Asm.js – and that it ran really well.

Getting a C++ game engine running in JavaScript, using WebGL for rendering, is a massive feat and is largely due to the toolchain that Mozilla has developed to make it all possible.
Since the release of the Unreal Engine 3 port to Asm.js I’ve been watching the response on Twitter, blogs, and elsewhere, and while some developers are grasping the interesting confluence of open technologies that’ve made this advancement happen, I’ve also seen a lot of confusion: Is Asm.js a plugin? Does Asm.js make my regular JavaScript fast? Does this work in all browsers? I feel that Asm.js, and related technologies, are incredibly important, and I want to try and explain the technology so that developers know what’s happened and how they will benefit. In addition to my brief exploration into this subject I’ve also asked David Herman (Senior Researcher at Mozilla Research) a number of questions regarding Asm.js and how all the pieces fit together.

What is Asm.js?

In order to understand Asm.js and where it fits into the browser you need to know where it came from and why it exists.
Asm.js comes from a new category of JavaScript application: C/C++ applications that’ve been compiled into JavaScript. It’s a whole new genre of JavaScript application that’s been spawned by Mozilla’s Emscripten project.
Emscripten takes in C/C++ code, passes it through LLVM, and converts the LLVM-generated bytecode into JavaScript (specifically, Asm.js, a subset of JavaScript).

If the compiled Asm.js code is doing some rendering then it is most likely being handled by WebGL (and rendered using OpenGL). In this way the entire pipeline is technically making use of JavaScript and the browser but is almost entirely skirting the actual, normal, code execution and rendering path that JavaScript-in-a-webpage takes.
Asm.js is a subset of JavaScript that is heavily restricted in what it can do and how it can operate. This is done so that the compiled Asm.js code can run as fast as possible, with the engine making as few assumptions as it can and converting the Asm.js code directly into assembly. It’s important to note that Asm.js is just JavaScript – there is no special browser plugin or feature needed in order to make it work (although a browser that is able to detect and optimize Asm.js code will certainly run it faster). It’s a specialized subset of JavaScript that’s optimized for performance, especially for this use case of applications compiled to JavaScript.
The best way to understand how Asm.js works, and its limitations, is to look at some Asm.js-compiled code. Let’s look at a function extracted from a real-world Asm.js-compiled module (from the BananaBread demo). I formatted this code so that it’d be a little bit saner to digest – it’s normally just a giant blob of heavily-minimized JavaScript:
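Since the extracted function itself is a giant blob, here is a hypothetical fragment written in the same style — not the actual BananaBread code — using the variable names (c, g, f, and e) that the observations below refer to:

```javascript
// Illustrative sketch only – not the real BananaBread function.
// c stands in for an Int32Array view of the heap, g for a Float32Array view.
function Vb(c, g, e) {
    var f = 0;
    f = e | 0;                       // coerce the argument to a 32-bit integer
    c[(f & 255) | 0] = (f + 1) | 0;  // integer store: value coerced with | 0
    g[(f & 255) | 0] = +(f * 0.5);   // float store: value coerced with +(...)
    return +g[(f & 255) | 0];        // float load, coerced back to a double
}
```

Even in this toy version, every read and write is wrapped in a coercion, and nothing but numbers ever appears.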

Technically this is JavaScript code but we can already see that this looks nothing like most DOM-using JavaScript that we normally see. A few things we can notice just by looking at the code:
  • This particular code only deals with numbers. In fact this is the case for all Asm.js code: Asm.js is only capable of handling a selection of different number types and no other data structures (not even strings, booleans, or objects).
  • All external data is stored and referenced from a single object, called the heap. Essentially this heap is a massive array (intended to be a typed array, which is highly optimized for performance). All data is stored within this array – effectively replacing global variables, data structures, closures, and any other forms of data storage.
  • When accessing and setting variables the results are consistently coerced into a specific type. For example f = e | 0; sets the variable f to equal the value of e but also ensures that the result will be an integer (| 0 does this, converting a value into an integer). We also see this happening with floats – note the use of 0.0 and g[...] = +(...);.
  • Looking at the values coming in and out of the data structures it appears as if the data structure represented by the variable c is an Int32Array (storing 32-bit integers, the values are always converted from or to an integer using | 0) and g is a Float32Array (storing 32-bit floats, the values are always converted to a float by wrapping them with +(...)).
By doing this the result is highly optimized and can be converted directly from this Asm.js syntax into assembly without having to interpret it, as one would normally have to do with JavaScript. It effectively shaves off a whole bunch of things that can make a dynamic language, like JavaScript, slow: the need for garbage collection and dynamic types, for instance.
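The two coercion idioms are easy to try in any JavaScript console; this small sketch (my own, not from any Asm.js module) shows what | 0 and unary + actually do:

```javascript
// `| 0` truncates to a 32-bit signed integer; unary `+` converts to a double.
var a = 10.7 | 0;              // 10: the fraction is discarded
var b = (0x7fffffff + 1) | 0;  // -2147483648: the value wraps at 32 bits
var c = +"3";                  // 3: a string coerced to a number
var d = +(1 / 2);              // 0.5: stays a double
```

An Asm.js-aware engine can read these coercions as type annotations and skip dynamic type checks entirely.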
As an example of some more-explanatory Asm.js code let’s take a look at an example from the Asm.js specification:
function DiagModule(stdlib, foreign, heap) {
    "use asm";

    // Variable Declarations
    var sqrt = stdlib.Math.sqrt;

    // Function Declarations
    function square(x) {
        x = +x;
        return +(x*x);
    }

    function diag(x, y) {
        x = +x;
        y = +y;
        return +sqrt(square(x) + square(y));
    }

    return { diag: diag };
}
Looking at this module it seems downright understandable! Looking at this code we can better understand the structure of an Asm.js module. A module is contained within a function and starts with the "use asm"; directive at the top. This gives the interpreter the hint that everything inside the function should be handled as Asm.js and be compiled to assembly directly.
Note, at the top of the function, the three arguments: stdlib, foreign, and heap. The stdlib object contains references to a number of built-in math functions. foreign provides access to custom user-defined functionality (such as drawing a shape in WebGL). And finally heap gives you an ArrayBuffer which can be viewed through a number of different lenses, such as Int32Array and Float32Array.
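The heap arrangement can be sketched in a few lines of plain JavaScript (illustrative, not taken from the spec): one ArrayBuffer, with several typed-array views over the same bytes:

```javascript
// One backing buffer, multiple "lenses" over the same memory.
var heap = new ArrayBuffer(0x1000);  // 4 KB of raw bytes
var i32 = new Int32Array(heap);      // view the bytes as 32-bit integers
var f32 = new Float32Array(heap);    // view the same bytes as 32-bit floats
var u8  = new Uint8Array(heap);      // ...or as raw bytes

i32[0] = 1;  // a write through the integer view is visible through the others
```

This is how a single flat allocation can stand in for all of a C program’s global variables, stack, and malloc’d memory.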
The rest of the module is broken up into three parts: variable declarations, function declarations, and finally an object exporting the functions to expose to the user.
The export is an especially important point to understand as it allows all of the code within the module to be handled as Asm.js but still be made usable to other, normal, JavaScript code. Thus you could, theoretically, have some code that looks like the following, using the above DiagModule code:
document.body.onclick = function() {
    function DiagModule(stdlib){"use asm"; ... return { ... };}

    var diag = DiagModule({ Math: Math }).diag;
    alert(diag(10, 100));
};
This would result in an Asm.js DiagModule that’s handled specially by the JavaScript interpreter but is still made available to other JavaScript code (thus we could still access it and use it within a click handler, for example).
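Because a module is just a function returning ordinary functions, the spec’s example runs unchanged in any JavaScript engine; an Asm.js-aware engine simply compiles it ahead of time. A quick self-contained check:

```javascript
// The spec's DiagModule, runnable as plain JavaScript in any engine.
function DiagModule(stdlib) {
    "use asm";
    var sqrt = stdlib.Math.sqrt;

    function square(x) {
        x = +x;
        return +(x * x);
    }

    function diag(x, y) {
        x = +x;
        y = +y;
        return +sqrt(square(x) + square(y));
    }

    return { diag: diag };
}

var diag = DiagModule({ Math: Math }).diag;
// diag(3, 4) computes sqrt(9 + 16) = 5 whether or not the engine
// recognizes the "use asm" directive.
```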

What is the performance like?

Right now the only implementation that exists is in nightly versions of Firefox (and even then, only on a couple of platforms). That being said, early numbers show the performance being really, really good. For complex applications (such as the above games) performance is only around 2x slower than normally-compiled C++ (which is comparable to other languages like Java or C#). This is substantially faster than current browser runtimes, yielding performance that’s about 4-10x faster than the latest Firefox and Chrome builds.

This is a substantial improvement over the current best case. Considering how early we are in Asm.js’s development, it’s very likely that even greater performance improvements are coming.
It is interesting to see such a large performance chasm appearing between Asm.js and the current engines in Firefox and Chrome. A 4-10x performance difference is substantial (this is in the realm of comparing these browsers to the performance of IE 6). Interestingly even with this performance difference many of these Asm.js demos are still usable on Chrome and Firefox, which is a good indicator for the current state of JavaScript engines. That being said their performance is simply not as good as the performance offered by a browser that is capable of optimizing Asm.js code.

Use Cases

It should be noted that almost all of the applications that are targeting Asm.js right now are C/C++ applications compiled to Asm.js using Emscripten. With that in mind the kind of applications that are going to target Asm.js, in the near future, are those that will benefit from the portability of running in a browser but which have a level of complexity in which a direct port to JavaScript would be infeasible.
So far most of the use cases have centered around code bases where performance is of the utmost importance, such as games, graphics, programming language interpreters, and libraries. A quick look through the Emscripten project list shows many projects that will be of instant use to many developers.

Asm.js Support

As mentioned before the nightly version of Firefox is currently the only browser that supports optimizing Asm.js code.
However it’s important to emphasize that Asm.js-formatted JavaScript code is still just JavaScript code, albeit with an important set of restrictions. For this reason Asm.js-compiled code can still run in other browsers as normal JavaScript code, even if that browser doesn’t support it.
The critical puzzle piece is the performance of that code: if a browser doesn’t support typed arrays or doesn’t specially compile the Asm.js code then the performance is going to be much worse. Of course this isn’t unique to Asm.js – a browser that doesn’t have those features is likely suffering in other ways as well.

Asm.js and Web Development

As you can probably see from the code above, Asm.js isn’t designed to be written by hand. It’s going to require some sort of tooling to write, and it’s going to require some rather drastic changes from how one would normally write JavaScript in order to use. The most common use case for Asm.js right now is applications compiled from C/C++ to JavaScript. Almost none of these applications interact with the DOM in a meaningful way, beyond using WebGL and the like.
In order for it to be usable by regular developers there are going to have to be some more user-accessible intermediary languages that can compile to Asm.js. The best candidate at the moment is LLJS, for which work has started on compiling it to Asm.js. It should be noted that a language like LLJS is still going to be quite different from regular JavaScript and will likely confuse many JavaScript users. Even with a nice, more user-accessible language like LLJS, it’s likely that it’ll still only be used by hardcore developers who want to optimize extremely complex pieces of code.
Even with LLJS, or some other language that could allow for more hand-written Asm.js code, we still wouldn’t have an equally-optimized DOM to work with. The ideal environment would be one where we could compile LLJS code and the DOM together to create a single Asm.js blob which could be executed simultaneously. It’s not clear to me what the performance of that would look like but I would love to find out!

Q&A with David Herman

I sent some questions to David Herman (Senior Researcher at Mozilla Research) to try and get some clarification on how all the pieces of Asm.js fit together and how they expect users to benefit from it. He graciously took the time to answer the questions in-depth and provided some excellent responses. I hope you find them to be as illuminating as I did.
What is the goal of Asm.js? Who do you see as the target audience for the project?
Our goal is to make the open web a compelling virtual machine, a target for compiling other languages and platforms. In this first release, we’re focused on compiling low-level code like C and C++. In the longer run we hope to add support for higher-level constructs like structured objects and garbage collection. So eventually we’d like to support applications from platforms like the JVM and .NET.
Since asm.js is really about expanding the foundations of the web, there’s a wide range of potential audiences. One of the audiences we feel we can reach now is game programmers who want access to as much raw computational power as they can. But web developers are inventive and they always find ways to use all the tools at their disposal in ways no one predicts, so I have high hopes that asm.js will become an enabling technology for all sorts of innovative applications I can’t even imagine.
Does it make sense to create a more user-accessible version of Asm.js, like an updated version of LLJS? What about expanding the scope of the project beyond just a compiler target?
Absolutely. In fact, my colleague James Long recently announced that he’s done an initial fork of LLJS that compiles to asm.js. My team at Mozilla Research intends to incorporate James’s work and officially evolve LLJS to support asm.js.
In my opinion, you generally only want to write asm.js by hand in a very narrow set of instances, like any assembly language. More often, you want to use more expressive languages that compile efficiently to it. Of course, when languages get extremely expressive like JavaScript, you lose predictability of performance. (My friend Slava Egorov wrote a nice post describing the challenges of writing high-performance code in high-level languages.) LLJS aims for a middle ground — like a C to asm.js’s assembly — that’s easier to write than raw asm.js but has more predictable performance than regular JS. But unlike C, it still has smooth interoperability with regular JS. That way you can write most of your app in dynamic, flexible JS, and focus on only writing the hottest parts of your code in LLJS.
There is talk of a renewed performance divide between browsers that support Asm.js and browsers that don’t, similar to what happened during the last JavaScript performance race in 2008/2009. Even though, technically, Asm.js code can run everywhere, in reality the performance difference will simply be too crippling for many cases. Given this divide, and the highly restricted subset of JavaScript, why did you choose JavaScript as a compilation target? Why JavaScript instead of a custom language or plugin?
First of all, I don’t think the divide is as stark as you’re characterizing it: we’ve built impressive demos that work well in existing browsers but will benefit from killer performance with asm.js.
It’s certainly true that you can create applications that will depend on the increased performance of asm.js to be usable. At the same time, just like any new web platform capability, applications can decide whether to degrade gracefully with some less compute-intensive fallback behavior. There’s a difference in kind between an application that works with degraded performance and an application that doesn’t work at all.
More broadly, keep in mind the browser performance race that started in the late 00′s was great for the web, and applications have evolved along with the browsers. I believe the same thing can and will happen with asm.js.
How would you compare Asm.js with Google’s Native Client? They appear to have similar goals while Asm.js has the advantage of “just working” everywhere that has JavaScript. Have there been any performance comparisons?
Well, Native Client is a bit different, since it involves shipping platform-specific assembly code; I don’t believe Google has advocated for that as a web content technology (as opposed to making it available to Chrome Web Store content or Chrome extensions), or at least not recently.
Portable Native Client (PNaCl) has a closer goal, using platform-independent LLVM bitcode instead of raw assembly. As you say, the first advantage of asm.js is compatibility with existing browsers. We also avoid having to create a system interface and repeat the full surface area of the web API’s as the Pepper API does, since asm.js gets access to the existing API’s by calling directly into JavaScript. Finally, there’s the benefit of ease of implementability: Luke Wagner got our first implementation of OdinMonkey implemented and landed in Firefox in just a few months, working primarily by himself. Because asm.js doesn’t have a big set of syscalls and API’s, and because it’s built off of the JavaScript syntax, you can reuse a whole bunch of the machinery of an existing JavaScript engine and web runtime.
We could do performance comparisons to PNaCl but it would take some work, and we’re more focused on closing the gap to raw native performance. We plan to set up some automated benchmarks so we can chart our progress compared with native C/C++ compilers.
Emscripten, another Mozilla project, appears to be the primary producer of Asm.js-compatible code. How much of Asm.js is being dictated by the needs of the Emscripten project? What benefits has Emscripten received now that improvements are being made at the engine level?
We used Emscripten as our first test case for asm.js as a way to ensure that it’s got the right facilities to accommodate the needs of real native applications. And of course benefiting Emscripten benefits everyone who has native applications they want to port — such as Epic Games, who we teamed up with to port the Unreal Engine 3 to the web in just a few days using Emscripten and asm.js.
But asm.js can benefit anyone who wants to target a low-level subset of JavaScript. For example, we’ve spoken with the folks who build the Mandreel compiler, which works similarly to Emscripten. We believe they could benefit from targeting asm.js just as Emscripten has started doing.
Alon Zakai has been compiling benchmarks that generally run around 2x slower than native, where we were previously seeing results anywhere from 5x to 10x or 20x of native. This is just in our initial release of OdinMonkey, the asm.js backend for Mozilla’s SpiderMonkey JavaScript engine. I expect to see more improvements in coming months.
How fluid is the Asm.js specification? Are you open to adding in additional features (such as more-advanced data structures) as more compiler authors begin to target it?
You bet. Luke Wagner has written up an asm.js and OdinMonkey roadmap on the Mozilla wiki, which discusses some of our future plans — I should note that none of these are set in stone but they give you a sense of what we’re working on. I’m really excited about adding support for ES6 structured objects. This would provide garbage-collected but well-typed data structures, which would help compilers like JSIL that compile managed languages like C# and Java to JavaScript. We’re also hoping to use something like the proposed ES7 value types to provide support for 32-bit floats, 64-bit integers, and hopefully even fixed-length vectors for SIMD support.
Is it possible, or even practical, to have a JavaScript-to-Asm.js transpiler?
Possible, yes, but practical? Unclear. Remember in Inception how every time you nested another dream-within-a-dream, time would slow down? The same will almost certainly happen every time you try to run a JS engine within itself. As a back-of-the-envelope calculation, if asm.js runs native code at half native speed, then running a JS engine in asm.js will execute JS code at half that engine’s normal speed.
Of course, you could always try running one JS engine in a different engine, and who knows? Performance in reality is never as clear-cut as it is in theory. I welcome some enterprising hacker to try it! In fact, Stanford student Alex Tatiyants has already compiled Mozilla’s SpiderMonkey engine to JS via Emscripten — all you’d have to do is use Emscripten’s compiler flags to generate asm.js. Someone with more time on their hands than me should give it a try…
At the moment all DOM/browser-specific code is handled outside of Asm.js-land. What about creating an Emscripten-to-Asm.js-compiled version of the DOM (akin to DOM.js)?
This is a neat idea. It may be a little tricky with the preliminary version of asm.js, which doesn’t have any support for objects at all. As we grow asm.js to include support for ES6 typed objects, something like this could become feasible and quite efficient!
A cool application of this would be to see how much of the web platform could be self-hosted with good performance. One of the motivations behind DOM.js was to see if a pure JS implementation of the DOM could beat the traditional, expensive marshaling/unmarshaling and cross-heap memory management between the JS heap and the reference-counted C++ DOM objects. With asm.js support, DOM.js might get those performance wins plus the benefits of highly optimized data structures. It’s worth investigating.
Given that it’s fairly difficult to write Asm.js, compared with normal JavaScript, what sorts of tools would you like to have to help both developers and compiler authors?
First and foremost we’ll need languages like LLJS, as you mentioned, to compile to asm.js. And we’ll have some of the usual challenges of compiling to the web, such as mapping generated code back to the original source in the browser developer tools, using technologies like source maps. I’d love to see source maps developed further to be able to incorporate richer debugging information, although there’s probably a cost/benefit balance to be struck between the pretty minimal source location information of source maps and super-complex debugging metadata formats like DWARF.
For asm.js, I think we’ll focus on LLJS in the near term, but I always welcome ideas from developers about how we can improve their experience.
I assume that you are open to working with other browser vendors; what has collaboration or discussion been like thus far?
Definitely. We’ve had a few informal discussions and they’ve been encouraging so far, and I’m sure we’ll have more. I’m optimistic that we can work with multiple vendors to get asm.js somewhere that we all feel we can realistically implement without too much effort or architectural changes. As I say, the fact that Luke was able to implement OdinMonkey in a matter of just a few months is very encouraging. And I’m happy to see a bug on file for asm.js support in V8.
More importantly, I hope that developers will check out asm.js and see what they think, and provide their feedback both to us and other browser vendors.


More Than Just Lorem Ipsum: Content Knowledge Is Power

More Than Just Lorem Ipsum: Content Knowledge Is Power:
“Content matters!” “Comp with real copy!” “Have a plan!” By now, you’ve probably heard the refrain: making mobile work is hard if you don’t consider your content. But content knowledge isn’t just about ditching lorem ipsum in a couple of comps.
Countless organizations now have a decade or two’s worth of Web content — content that’s shoved somewhere underneath their redesigned-nine-times home page. Content that’s stuck in the crannies of some sub-sub-subnavigation. Content that’s clogging up a CMS with WYSIWYG-generated markup.
Messy, right? Well, not as messy as it will be — because legacy content is the thing that loves to rear its ugly head late in the game, “breaking” your design and becoming the bane of your existence.
But when you take the time to understand the content that already exists, not only will you be able to ensure that it’s supported in the new design, but you’ll actually make the entire design stronger because you’ll have realistic scenarios to design with and for — not to mention an opportunity to clean out the bad outdated muck before it obscures your sparkly new design.
Today, we’re going to make existing content work for you, not against you.

What You Don’t Know Will Hurt You

When you’re working on something new and fun, ignoring the deep recesses of content is tempting. After all, you’ve got a lot to think about already: designing for touch, dealing with ever-changing screen sizes, adding geolocation features, maybe even blinging things out with a few badges.
But if content parity matters to you (and it damn well should if you care one whit about the “large and growing minority of Internet users” who always or mostly access the Web on a mobile device), then at some point you’ll have to deal with the unruly content lurking underneath your website’s neat surface.
Why? Because chances are there’ll be stuff out there that you’ve never thought about, much less designed for. And all that stuff has to go somewhere — too often, shoehorned into a layout it was never meant to inhabit, or perhaps not even migrated into a new template but instead left to wither in an outdated, mobile-unfriendly design.
Take navigation. As Brad Frost has written, designing small-screen navigation is simply tricky, any way you slice it.
Hard as it already is, it becomes downright impossible if you haven’t dealt with your legacy assets first. You’re sure to end up with problems, like a navigation system that only works for two levels of content when you actually have four levels to contend with, making all of that deeper information accessible only through hard-to-manage (and hard-to-find) text links — or, worse, making it completely inaccessible except through search.
There’s a better way.

In The Belly Of The Beast

Mark Boulton has written eloquently on content-out design — the concept of determining how your design should shift for varying displays by focusing not on screen sizes, but on where your content naturally breaks down. It’s excellent advice.
But if you’re trying to work with a website with thousands of URLs — or anything more than a few dozen, really — you have to ask: Which content do I design with? Unless you’re relying on infinite monkeys designing infinite layouts to create custom solutions for every single page, you’re going to have to rely on representative content: a set of content that demonstrates the variety of information that the experience needs to support.
So, how do you know what’s representative? You get your arms around the size, scope, structure and substance of your content.
Yup. It’s time for the content audit.
People have been talking about content audits and inventories for more than a decade — in fact, Jeffrey Veen wrote about them on Adaptive Path back in 2002, calling them a “mind-numbingly detailed odyssey through your web site.” At the time, people were starting to yank their websites from static hand-coded pages and pull them into content management systems, and someone needed to sit down and sort it all out.
More than a decade later, I’d say content audits are more useful than ever — but in a slightly different way. Today, a content audit isn’t just an odyssey through your website; it’s a window into your content’s nature.

What To Look For

You could audit content for all kinds of things, depending on what you want to learn and be able to do with the information. Some audits focus on brand and voice consistency, others on assessing quality or identifying ROT.
There’s nothing wrong — and quite a lot right — with these priorities. But if you want to ready your content to be more flexible and adaptable, then you can’t just look at each page individually. You need to start finding patterns in the content.
It’s a simple question, really: What are we publishing? If your first answer is “a page,” look again. What’s the shape of this content? What is this content most essentially? Is it an interview, a feature story, a product, a bio, a recipe, an erotic poem, a manifesto? Asking these questions will help you see the natural pieces and parts that make up the content.
When you do, you’ll have a structural model for the content that matches your users’ mental model — i.e. the way they perceive what they’re looking at and how they understand what it means.
For example, I recently worked with a large publicly traded company whose website dates back to the early aughts. After a couple of responsive microsites, they’ve caught the bug and want to update everything. Problem is, the existing website’s a mess of subdomains, redirects and thousands of pages that are nowhere near ready for flexible layouts.
Our first step was to dig deep, like a geologist — except that instead of unearthing strata of shale and sandstone marking bygone eras, we identified and documented all of the forgotten templates, lost content and abandoned initiatives we could.
We ended up with a dozen or so content types that fit pretty much anything the company was producing. Sure, we still ended up with some general “pages.” But more often than not, our audit revealed something more specific — and useful — about the content’s nature. When it didn’t, that was often a sign that the content wasn’t serving a purpose — which put it on the fast track to retirement.
Once you’ve taken stock of what you have, gotten rid of the garbage and identified the patterns, you’ll also need to decide which attributes each content type needs to include: Do articles have date stamps? Does this need a byline? What about images? Features? Benefits? Timelines? Ingredients? Pull quotes? This will enable you to turn all of those old shapeless pages — “blobs,” as Karen McGrane has so affectionately labeled them — into a system of content that’s defined and interconnected:
A content model for a recipe

This content model shows attributes for the “recipe” content type, and how recipes fit into a broader system.
Each bit of structure you add gives you options: new abilities to control how and where content should be presented to best support its meaning and purpose.
Regardless of what you want to do with your content — launch a responsive website, publish to multiple websites simultaneously, extract snippets of content for the home page, reuse the content in an app, mash it up with a third party’s content — this sort of structure will make it possible, because it enables you to pick and choose which bits should go where, when.
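The jump from blob pages to a system of defined, interconnected content types can be sketched in code. Here’s a minimal, hypothetical model of the “recipe” type from the diagram above — the attribute names are illustrative, not taken from any real CMS:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Recipe:
    """A hypothetical 'recipe' content type: named, addressable
    attributes instead of one undifferentiated page blob."""
    title: str
    byline: str
    published: date
    ingredients: list
    steps: list
    pull_quote: str = ""  # optional attribute; not every recipe has one

    def teaser(self):
        """Pick out just the bits a home-page snippet needs."""
        return f"{self.title}, by {self.byline}"


pad_thai = Recipe(
    title="Weeknight Pad Thai",
    byline="A. Cook",
    published=date(2013, 4, 1),
    ingredients=["rice noodles", "tamarind paste"],
    steps=["Soak the noodles.", "Stir-fry everything."],
)
print(pad_thai.teaser())  # Weeknight Pad Thai, by A. Cook
```

Because each attribute is addressable on its own, a template can pull the pull quote into a sidebar or the teaser onto a home page without scraping strings out of a rendered page.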

Tools for Auditing Content

The content audit may not be new, but some tools to help you get started are. Lately, I’ve been running initial reports with the Content Analysis Tool (CAT), which, for a few bucks, produces a detailed report of every single page of content that its spiders can find across your website.
Using CAT’s Web interface, you can sift through the report and see details such as page types, titles, descriptions, images and even the content in <h1> tags — super-useful if you’re assessing content of murky origin, because a headline often gives you at least a glimmer of what a page is about.
Here’s an excerpt of what it found for Smashing Magazine’s own “Guidelines for Mobile Web Development” page:
An excerpt from the Content Analysis Tool

The CAT report shows a thumbnail of the page, as well as some data about its content. See the full screenshot for more.
While features such as screenshots of all pages and lists of links are useful for individual analysis, I prefer to export CAT’s reports into a big ol’ CSV file, where the raw data looks like this, with each row of the spreadsheet representing a single URL:
An excerpt of a raw CSV report from the Content Analysis Tool
CAT also spits out detailed CSVs chockfull of raw data about all pages of a website. See the full screenshot for all of the fields.
It’s not perfect. For example, if content’s been abandoned and removed from navigation but left floating out there in the tubes, CAT typically won’t pick it up. And if a website’s headlines aren’t marked up using <h1> (like Smashing Magazine, which uses <h2>s), then it won’t scrape them either.
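When a crawler misses headlines that live in other tags, a small custom pass can fill the gap. This is a sketch using Python’s standard-library html.parser — it isn’t part of CAT, just an illustration of scraping both <h1> and <h2> text from a page:

```python
from html.parser import HTMLParser


class HeadlineScraper(HTMLParser):
    """Collect the text inside <h1> and <h2> tags, since some sites
    (like Smashing Magazine) mark up headlines with <h2>."""

    def __init__(self):
        super().__init__()
        self._depth = 0       # are we currently inside a headline tag?
        self.headlines = []   # collected headline strings

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2"):
            self._depth += 1
            self.headlines.append("")

    def handle_endtag(self, tag):
        if tag in ("h1", "h2") and self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth and data.strip():
            self.headlines[-1] += data.strip()


scraper = HeadlineScraper()
scraper.feed("<h1>Guidelines</h1><p>intro</p><h2>Mobile Web</h2>")
print(scraper.headlines)  # ['Guidelines', 'Mobile Web']
```

In practice you’d feed it the HTML fetched for each URL in your audit spreadsheet and merge the results back in as an extra column.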
What it is great for, though, is getting a quick snapshot of an entire website. From here, I usually do the following:
  • Add fields for my own needs, such as qualitative rankings or keep/delete notations;
  • Set up filtering and sorting so that I can slice the data by whichever field I want, such as according to the section where it’s located;
  • Assess and rank each page according to whatever qualitative attributes we’ve settled on;
  • Note any patterns in the content types and structures used, as well as relationships to other content;
  • Define suggested meta-data types and tags that the content should have;
  • Use pivot tables, which summarize and sort data across multiple dimensions, to identify trends in the content.
With this, I now have both the detailed information to drive specific page-level changes and the high-level patterns to inform structural recommendations, CMS updates, meta-data schema and other efforts to improve content portability and flexibility.
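The same page-level CSV can be summarized without a spreadsheet at all. Here’s a rough sketch of the “pivot” step using only the standard library — the columns, URLs and values are made up for illustration, not CAT’s actual export format:

```python
import csv
import io
from collections import Counter

# Hypothetical excerpt of an audit CSV: one row per URL,
# with the qualitative columns added by hand during review.
raw = """\
url,section,content_type,keep
/about/team,about,bio,yes
/news/2013/launch,news,press-release,yes
/news/2012/old,news,press-release,no
/recipes/pad-thai,recipes,recipe,yes
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# A tiny "pivot table": count pages per (section, keep/delete) pair
# to spot which sections are carrying the most dead weight.
pivot = Counter((r["section"], r["keep"]) for r in rows)
print(pivot[("news", "no")])  # 1 legacy press release flagged for retirement
```

A real audit would have thousands of rows, but the shape of the analysis is the same: group the raw page data by whichever fields reveal the pattern you’re after.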
I like using CAT because it was designed by and for content strategists — and improved features are rolling out all the time — but you can also use a similar tool from SEOmoz (although it tends to sell you on fancy-pants reporting features), or even grab a report from your CMS (depending on which one you use and how it collects information).
Any of these tools will help you quickly collect raw data. But remember that they’re just a head start. Nothing replaces putting your eyes — and brain — on the content.

The Secret To Scale

You don’t have to love auditing content. You certainly don’t need to develop a sick addiction to pivot tables (but it’s totally OK if you do). What you will love, I promise, is what a deep knowledge of content enables you to do: create an extensible design system that doesn’t devolve at scale.
For example, let’s look at some of the larger websites that have started using responsive design. There’s higher education, of course, where early adopters such as the University of Notre Dame were quickly followed by a rash of college websites.
What do most of these websites have in common? Two things: a lot of complex content and a responsive system that carries through to only a handful of pages, like UCLA’s website, where the home page and a few key pages are responsive, but the deeper content is not:
UCLA’s responsive home page and non-responsive admissions page
UCLA’s home page is responsive, but most of the website, like this landing page, is not. Larger view.
Why doesn’t that design go deeper? I’d bet it’s because making a responsive website scale takes work, as Nishant Kothary summed up brilliantly in his story of Microsoft’s new responsive home page from late 2012:
“The Microsoft.com team built tools, guidelines, and processes to help localize everything from responsive images to responsive content into approximately 100 different markets… They adapted their CMS to allow Content Strategists to program content on the site.”
In other words, a home page isn’t just a home page. You have to change both your content and the jobs of the people who manage it to make it happen.
But one industry has had some luck in building responsively at scale: the media — including massive enterprises such as Time, People and, of course, the Boston Globe. These organizations manage as much content as Microsoft and universities, or even more, but as publishers with a long history of creating professional, planned, organized content, they have a huge leg up: they know what they publish, whether it’s editorials or features or profiles or news briefs. Because of this, everything they publish fits into a system — making it much easier to apply responsive design patterns across all of their content.

Making Tough Choices

When you start breaking down your big, messy blobs of content and understanding how they really operate, you’ll realize there’s always more you could do: add more structure, more editing, more CMS customization. It never ends.
That’s OK.
When you understand the realities of what you’re dealing with, you’re better equipped to prioritize what you do — and what you choose not to do. You can make smart trade-offs — like deciding how much time you’re willing to invest now in order to have the flexibility to do more later, or what level of process change the current staff can handle versus the amount of flexibility you could use in the content.
There are no right answers. All we can do is find the right balance for each project, team and audience — and recognize that some structure is going to serve us a whole lot longer than none will.

Everyone’s Job

I get it. Going through endless reams of content ain’t your thing. You’re a designer, a developer, a project manager, damn it. You just want to get on with it, right?
We all do. But the more you seek to understand your content, the better your other work will be. The less often your project will go off the rails right around the time it’s supposed to launch. The fewer problems you’ll have with designs that “break” when real content gets inputted. The more the organization will be able to keep things in order after launch.
Best of all, the more your users will get the content they need — wherever and however they want it.
Thanks and credits go to Ricardo Gimenes, for preparing the front page image.

© Sara Wachter-Boettcher for Smashing Magazine, 2013.