Archive for January, 2014

Reading List

Regions to be cheerful?

The Web has been buzzing with the news that Blink has said “nope” to Adobe’s CSS Regions spec. Here are some hand-picked links about that, and about the general state of the CSS Regions spec.

Other suggestions for text fragmentation are available, such as CSS Overflow Module Level 3, CSS Fragmentation Module Level 3 and (related) CSS Figures.
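
For context, here’s a minimal sketch of what Regions does, assuming the prefixed syntax Blink and WebKit shipped at the time: content is taken out of normal layout into a named flow, then fragments across a chain of region boxes.

    <article class="source">Long text that needs to fragment…</article>
    <div class="region"></div>
    <div class="region"></div>

    <style>
      /* Remove the article from normal layout into a named flow */
      .source { -webkit-flow-into: story; }
      /* Each region consumes the flow in turn; overflow continues
         into the next region in the chain */
      .region { -webkit-flow-from: story; width: 20em; height: 10em; }
    </style>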

Standards

  • HTML is too complex – “unless there is an immediate visual or behavioural benefit to using an element, most people will ignore it”. Some real food for thought in this sentence.
  • A World Managed By Apps Is Closed For Those Without A Smartphone – “Every time you make a service or device that can only be managed from an app, you are basically adding to a systematic poor tax. You make it easier for those comfortable, with great smartphones in their hand, to get shit done, while not spreading that benefit to those without the magic box. You deepen economic entrenchment.”
  • Why is Progressive Enhancement so unpopular? by fellow old-timer, Drew McLellan
  • [HTML Imports]: Sync, async, -ish? If you’re importing a web component (you hypercool ninja, you) should the whole render be blocked until the component can be drawn? Of course not, argues Jake Archibald persuasively. Elsewhere he writes, “Instead, we should follow the default behaviour of <img>. An <img> doesn’t block parsing or rendering, the image appears when it is loaded. The developer can reserve an area for the image to change the reflowing behaviour.” Not as daft as he looks, that Jake.
  • Why does this spec replicate HTML features? – Imagine we have a manifest file for web apps, that pretty much duplicates what HTML can do with <meta> elements already. If both are present, which should “win”?
  • Nine Things to Expect from HTTP/2 by Mark Nottingham, chair of the IETF HTTPbis Working Group. So he knows.
  • will-change: a CSS hint to the browser that an element’s appearance is about to change, so it can make optimisations ahead of time, eg paint the element to another layer immediately for faster animation. Replaces hacks like translateZ(0) and -webkit-backface-visibility: hidden. (There’s a sketch after this list.)
  • Pointer Events Progress: Mozilla and Blink Communities Reach a Significant Engineering Milestone – synergies between Microsoft, Blink and Mozilla. Who’s missing, I wonder?
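
As promised above, a minimal sketch of the will-change hint (the selector and timings are invented; the property name is per the then-current CSS Will Change draft):

    /* Old hack: force the element onto its own compositor layer */
    .menu { transform: translateZ(0); }

    /* New hint: declare which property is about to animate, so the
       browser can promote the element to a layer ahead of time */
    .menu { will-change: transform; transition: transform 0.3s; }
    .menu.open { transform: translateX(20em); }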

Scampi Bug Yeti

As Mike Taylr points out, “Scampi Bug Yeti” is an anagram of “Spec Ambiguity”. Sometimes, people moan that the specification process of web standards is grindingly slow and laborious, with too much detail, etc. But the reality is that specs that are ambiguous or not tightly defined cause problems with interoperability. For example,

Reading List

Articles that I’ve tweeted this week. Not necessarily fact-checked, or endorsed.

Standards

Browsers, techniques, resources

I’m taking a break to get some sun and even a bit of culture. Blog comments are disabled to stop comment spam. See you in a little while. XXX

Reading List

(Last Updated on 9 January 2015)

Thoughts on monetising user data

Aral Balkan asked me to “cut to the chase, Bruce: do you find anything wrong with the business models of Facebook & Google (monetising data)?”

It’s something I’ve been thinking a lot about, but it needs more than 140 characters, so here goes. Note that these are my personal opinions. I work for Opera, which has business relationships with Google and Facebook, and which has its own advertising arm.

But I also use Google and Facebook services privately so have my own views as a user; again, these are my opinions, not those of my employers.

I work on the web, but at home on my own, so I use Facebook and Twitter a lot. Not only are they useful for discussing work, they’re my “watercooler”. I don’t mind that the personal stuff I write is publicly available, although I keep my location secret and no longer put the names of my kids online. (Facebook stuff isn’t public. I only really use it as it’s where non-geek real-life friends are.)

I don’t much mind that Google tracks my searching habits around the Web (although I would pay money not to have to watch Treehouse Woman again on YouTube, because she’s too shinyhappy, and puts her coffee down on a wooden surface without using a coaster).

The annoyance I find is offset by the fact that I understand why they do this; it’s how they make money to support the services I use for free, which are primarily Search, Gmail and YouTube. (I get no benefit from Google+.)

In short – I understand that “I am the product being sold”, and am OK with that. Similarly, I’m fine with getting tailored money-off vouchers for products that I use, sent to me by supermarkets who know what I use because they monitor it. I opt in, because I see value in that. You may not; that’s fine.

As long as a company’s privacy settings are both clear and honoured by the company, I don’t see this data gathering and data mining as inherently intrusive. I’m not sure that all companies’ privacy settings are sufficiently clear, however; I read a case study some years ago in which a good-sized sample of people were asked what privacy settings they had on their social networks, and their answers were compared with the actual settings – very few matched. The Facebook Android app permissions are certainly opaque.

Perhaps companies that do monetise data could make their privacy settings more transparent, and be even more upfront that the price of “free” is your data. But I think the latter is pretty obvious to those who give it a little thought; we can’t always handhold stupid people. There should certainly be a simple method to delete all one’s data and history from public view, with removal from the company’s servers and archives within a defined period of time.

What annoys me most is when people or organisations use my data without my permission. For example, a few years ago my wife had a minor car accident. Somewhere in the chain of insurance company, loss adjusters and repair garage, our phone number was given to an unauthorised third party, and I occasionally receive a phone call from a call centre trying to sell me “no win, no fee” ambulance-chasing legal services.

But beyond annoyance, what alarms me is secretive State intrusion into my life through my digital tracks. I assume that all companies – whether a supermarket loyalty scheme or a social network – regularly comply with warrants from law-enforcement agencies going about their legitimate work.

Let’s assume that the social networks and search engines, as they claim, don’t just hand over all their data to the governmental snoops. It then seems to me that, unless they’ve been fantastically lax with their security – which is certainly possible, but unlikely, given that it’s their core cash-generating asset – they can’t be blamed for the actions of the government.

We know from Edward Snowden that some companies’ data is just wholesale hacked by NSA, GCHQ and other state bodies. The legality of this is being debated in courts at the moment. The morality of this is clear (to me): it’s wrong. “If you’ve nothing to hide, you’ve nothing to fear” is the refrain of the KGB, the Gestapo and every despot across the globe.

Government intrusion isn’t new. When I was a teenager, I joined a communist party. My letters from them were always opened (and no others). Presumably this was actually done by the UK Post Office on police orders – that is, complete collusion, even though there was no warrant or reason to fear an idealistic but naive 17-year-old. It’s also long been rumoured that the voting slips of all UK communist voters were cross-referenced against their counterfoils, and the names of communist voters given to Special Branch and MI5.

In short, to answer Aral’s question: I don’t feel that commercial organisations using data that I’ve opted to provide them, for the purposes they said they’ll use it for, is wrong. It’s part of modern capitalism, which contains plenty I have to hold my nose about, but that’s a much longer blog post which I can’t be bothered to write.

The worrisome aspect is states illegally stealing our data from those companies, and putting us under constant surveillance, justified by keeping us safe from this year’s bogeymen.

But those same social networks and web companies allow us to share information on what they’re doing and organise in order to protest against it. The tension between individual liberty (I believe privacy is an integral part of liberty) and state control is not new. The threat may be greater because of technology, but the platform to fight it from is greater, too.

(I invited Aral to respond to this but as yet there’s no reply.)

(Last Updated on 17 September 2015)

I met the TAG

Last night I dragged my carcass down to London in order to meet the W3C Technical Architecture Group (TAG). This is the group that advises other W3C working groups on architectural matters – most notably (for Web developers) API design.

Co-chair Dan Appelquist blamed me for this event; after the inaugural Meet the TAG last June, I suggested that follow-up events be more structured and have a public Q&A, if only so the TAG Team didn’t have to answer the same questions repeatedly as they mingled.

On stage, but not expecting the Spanish Inquisition, were Anne van Kesteren (Mozilla), Sir Tim Berners-Lee (W3C Director and Olympics opening ceremony eyecandy), Alex Russell (Google), Yehuda Katz (Ember.js, Ruby on Rails and jQuery Core Teams) and Dan Appelquist (Telefonica). Other taggers in the audience were Peter Linss, Dame Jeni Tennison, Henry Thompson and Sergey Konstantinov.

Anne introduced the TAG, saying that it attempts to ensure that W3C APIs are designed in adherence to some core principles. I asked what those principles actually are. The reply (mostly from Alex Russell, Tim BL and Yehuda Katz) was that many older APIs don’t feel particularly webby; that’s because they were generally designed by those who code browsers in C++, and C++ isn’t the same as JavaScript. As we’ve progressed, we’ve generally got better, but there are still inconsistencies and weirdnesses from time to time.

We have lots of high-level APIs but we need to get to what’s underneath. For example, every browser has image decoders (that turn PNG, JPG, GIFs into bitmaps) but how can we access them? We can’t. Where is the API that allows us to tell an <img> element to defer loading? There isn’t one.

So we need to do what Alex Russell called “archeology” – define each layer in terms of a lower layer. Yehuda used HTML5 AppCache as an example; it does a whole bunch of things but, if those turn out not to be what you want, you’re stuffed. This is why Service Workers were invented, and it’s important to note that it’s possible to write AppCache’s higher-level functionality in Service Workers (there’s a sketch below). This layering is described in the Extensible Web Manifesto (which isn’t a TAG document, but which is signed by many TAG members as well as the glitterati of the standards world).
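
To make the layering point concrete, here’s a minimal sketch of AppCache-style offline behaviour written with the (then-draft) Service Worker API – the file names are invented, and the point is that every step is ordinary script you can change when the defaults don’t suit:

    // sw.js – registered with navigator.serviceWorker.register('/sw.js')
    self.addEventListener('install', function (event) {
      // Pre-cache a list of resources, much as AppCache's CACHE section did
      event.waitUntil(
        caches.open('offline-v1').then(function (cache) {
          return cache.addAll(['/', '/styles.css', '/app.js']);
        })
      );
    });

    self.addEventListener('fetch', function (event) {
      // Serve from cache, falling back to the network – AppCache-like
      // behaviour, except this logic is ours to rewrite if it isn't
      // what we want
      event.respondWith(
        caches.match(event.request).then(function (cached) {
          return cached || fetch(event.request);
        })
      );
    });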

Tim BL pointed out that access to very low levels was common in native app programming languages, and is becoming so on the web, but more parity is needed. The fact that we’re now calling it the “Web Platform” is more than a marketing-led rebrand.

Then followed an unedifying discussion about DRM – Digital Rights Management – or Encrypted Media Extensions as the extension to HTML is delicately called. (Contrary to unpopular belief, it’s not part of Core HTML.)

I say it was unedifying because it has nothing to do with TAG. But as few people have opportunity to ask Tim BL questions, it was a chance for them to ask the director of W3C about it. Trouble is, it’s a discussion that goes nowhere. If TAG’s disapproval of EME could abolish DRM from the world, I’d be all for discussing it (although I’d prefer they focus their new superpowers of abolition on hunger and war). But whether TAG likes it or not, Big Hollywood will implement DRM anyway.

Then I went for a pee and missed a bit about why making up your own tags with XML (and/or RDFa) is evil, but making up your own elements with Web Components is great. Anyone else catch the detail of that part? (Update: Jeremy Keith did.)

Jo Rabin suggested that it was incorrect to see the web as being for browsers only (for Web Components require JavaScript). Alex (from Google) countered that even inside Google, the search engineers aspire to make the Googlebots see the web as we humans do – that is, through browsers – because web pages are written by people, for people.

One thing everyone agreed on was that we all love URLs (or URIs, as Tim called them; is there a difference?).

All in all, it’s good to see TAG becoming a proxy for web developers, ensuring that APIs are sane for JavaScripters. It’s refreshing that they’re open for Q&A, too. Thanks to them, and Google Campus for hosting.

Reading List

Closing the gap between web and native

Standards

  • The picture Element Editor’s Draft, 2 January 2014. Lo, <picture> rises from the flames like a phoenix. This re-written spec combines the best bits of srcset and src-n in a webby markup syntax. The most important difference from “old <picture>” is that the <source> elements control the src of the <img> element; thus, <img> is an integral part of the construct rather than simply fallback (and so is unlikely to be omitted by authors, meaning old browsers won’t be left out). There’s a markup sketch after this list.
  • Input Method Editor API – interacting with virtual keyboards, handwriting pads etc. Particularly useful for Chinese/ Japanese/ Korean.
  • What is the DOM? – a beginner’s guide from Chris Coyier
  • We’re About to Lose Net Neutrality And the Internet as We Know It by A Lawyer
  • REMs, fallback and support by Stu Robson. TL;DR – add one line of CSS and make sure your font-sizes work everywhere
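
As promised in the first item, a sketch of the re-written <picture> markup (file names and breakpoints are invented; the attribute names follow the Editor’s Draft):

    <picture>
      <!-- The sources choose which URL the img shows… -->
      <source media="(min-width: 45em)" srcset="large.jpg">
      <source media="(min-width: 20em)" srcset="medium.jpg">
      <!-- …but the img does the displaying, so old browsers that
           ignore <picture> and <source> still get an image -->
      <img src="small.jpg" alt="A useful text alternative">
    </picture>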

Industry

Phwooar