Log
For personal space we need personal cyberspace
Thanks to Anna Patalong for posting the Guardian article that got me thinking about this topic.
The internet is famously a refuge for human horrors of all kinds, where, whatever your pet hate or perversion, you can find like-minded people with whom to celebrate and practise it; but the common image of out-of-the-way websites and password-protected forums only applies to the most egregious examples, the tabloid fodder.
Most of the hate, the quotidian racism, sexism and general denigration of the unfamiliar and uncomfortable is in plain sight, and flourishes in the most anodyne, homogeneous, controlled environments. Those on the receiving end are well aware of it, but few realise exactly what the problem is.
After all, Facebook deletes the nasty stuff, right?
In the Guardian article linked to above, the activist Soraya Chemaly hits the bull on the horns: “It’s not about censorship in the end. It’s about choosing to define what is acceptable.” Indeed, it is about censorship, and that is precisely choosing to define what is acceptable. The problem is that acceptability is defined by the mores of the majority: the heavy hand of the Dead White Male decrees that mere depictions in cake of human sex organs are an abomination and must be suppressed, while groups celebrating rape culture are fine. Facebook claims that “it’s not Facebook’s job to decide what is acceptable”, but by removing anything they have already done so: it’s the removal of the labia cupcakes that legitimises the rape jokes.
This particular imbalance is likely on any service paid for by mainstream advertisers; but imbalance is inevitable in any centralized service: even if Facebook removed nothing of their own accord, they would be subject to local law, and hence two major sources of take-down: law enforcement agencies (with mixed results: goodbye child pornography, goodbye political activism) and corporations (goodbye anything that may infringe our IP). Facebook, in other words, is what it’s like living in a privatized country.
We can certainly work to change Facebook’s idea of what is acceptable, but its centralized control will always tend to conservative uniformity. The physical world, full of iniquity at worst and compromise at best as it is, is much more nuanced, and crucially tends to have the property that the more personal the space, the greater the control we have of it. Interior decoration is more or less up to the inhabitant, but Facebook does not hesitate to censor our personal profiles.
Here then is a new reason to support personal computing initiatives, like the FreedomBox, along with privacy and control of our private data: we’ve unwittingly ceded the very construction of society to the cloud, and I fear that if we don’t take it back, then increasingly the gains we’ve made in physical society will not only be slowed and blunted, but reversed in the new domain that was supposed to be the freest space of all.
Wed, 10 Apr 2013 22:14:30
Time for Royal Mail to become Royal Email
A curious train of thought started as I was on the way to a local bookshop to buy a Mother’s Day card. I buy most of my greetings cards in this bookshop, but I’ve never bought a book there in the two and a half years I’ve lived in the area; I don’t buy many books, and mostly I buy through Amazon.
I say “through” because I find that most of the books I buy come from Amazon Marketplace sellers, not directly from Amazon, which is why I’m not sure that Amazon is altogether a Bad Thing: through their marketplace I have access to many small sellers, and it’s far from obvious that a foreign company operating an internet marketplace should be expected to pay UK taxes on that part of their profits. (I agree that they should do so for their direct UK trade.)
Further, I occasionally sell books through Amazon myself, something I’d never managed to do before; it took the combination of convenience and a huge potential market to make it worth offering my small range of somewhat rebarbative volumes. But it still feels wrong: the system is biased in favour of large sellers, most obviously Amazon themselves. Pitting tiny booksellers against each other across the country seems like a way to make everyone miserable, even as it enables the really small sellers (like me!) to get into the game at all.
A healthier system would be one that encourages localism. A monetized version of Freecycle? I might use such a system if it enabled me to make a little more on my sales by not having to pay postage, though I’d want to be able to post inventory data just once: locally and to Amazon.
But then it struck me that the reason nationwide Amazon Marketplace works is that one can send books anywhere in the country for the same amount: Royal Mail’s universal delivery obligation, more or less. Designed to give access to national life for everyone, however physically isolated, this distortion of the market also tips the scales in favour of large organisations.
But it’s letters that are really important to social inclusion, not parcels, and increasingly we’re using the internet to communicate. What if internet access were gradually substituted for universal delivery as the Royal Mail’s obligation? Universal high-speed broadband access has been a political “priority” for some time now, while delivery has been slow. If we made this substitution over the course of a generation (to allow for education, so that access really is universal: there are still far too many people who are not internet-literate to make this fair today), we could complete this important infrastructural investment, at the same time removing the artificial distortion to the physical landscape and thereby encouraging local trade. The infinitely malleable online markets would quickly adapt to postage-per-mile. There would need to be exceptions, for example for sending medicines by post, but these could be charged to the agency with the relevant obligation, in the case of medicine the NHS, thus making the argument over, and justification for, the different cases more transparent than at present.
Why give the Royal Mail this job, which has little to do with their current function or expertise? Because in fact it has everything to do with it: the function is a social one, the same as currently discharged by the Mail. The current fashion in the public sector, when desiring to regulate a sector in the public interest, is to set up a commissioning quango to buy in services from private sector organisations with (hopefully) the relevant technical competence. This seems to me to put the cart before the horse when the mission is a social one. The Royal Mail and the closely allied Post Office have decades of successful experience in delivering universal services (banking, benefits, bill paying…); this is “just” another one. Also, it seems hugely wasteful to spend a generation winding down one service while ramping up another that, modulo technology, is very similar in shape: a last-mile communications service.
It’d be good to have a financial incentive to actually buy books from my local bookshop.
Fri, 08 Mar 2013 17:02:39
The reason I’ve not heard of your cool language is because it’s non-free
Dear developer/researcher/company, I’ve just found your cool language. It might be recently announced, or it might’ve been lurking on the ’net for years. I read your page about it, and I went to download it. Oh dear, it’s non-free. Maybe I’ll try it anyway, though I won’t be using it for all the obvious reasons. Still, that explains why I’ve never heard of it until now.
One thing that a lot of developers seem to overlook is the sheer inertia that any non-free program has to overcome. If your program is free, a lot of people who wouldn’t otherwise use it will do so, and there’s a good chance of its getting into free software distros (which, by the way, are not just for free OSes). Free software is also much more likely to spread via non-free products, hardware or software. All that goes double for language implementations, because issues of portability, maintenance and licensing are even more acute when you’re making an investment in writing code.
So increasingly I conclude that language authors who insist on non-free licensing Just Don’t Get It. There’s one big exception: if your language is secret sauce, then hoarding it to try to make a fortune is at least rational. An example is K/q, which appears to have done very nicely for its author, who also wisely chose as his market the financial sector, in which your clients are likely to be less affected by the factors mentioned above: they’ll have big budgets, and be writing code they don’t expect to distribute, and which may well not have a long shelf-life.
Otherwise, the landscape is littered with duh. Until a few years ago I excused those who had perhaps not fully understood, or even predated, the internet-mediated explosion of free software (though there are plenty of examples of earlier generations who understood the value of freedom as well as anyone, such as Donald Knuth, whose typesetting languages TeX and Metafont dominate mathematical typesetting, and have maintained a large presence in many technical fields for over 30 years), but no longer. Some authors eventually see the light: Carl Sassenrath, author of the Amiga OS, eventually freed his intriguing REBOL language after 15 years of getting nowhere (and, as far as I can tell from reading between the lines, running out of money). Development now seems to be at least trickling along, though from reading the commit logs, he’s just reviewing, not writing. Other projects are freed almost too late: Strongtalk, a Smalltalk variant with static typing, was freed by Sun in 2006, ten years after its developers were acquired by Sun; since then no-one seems to have taken it on, and this now 20-year-old code is languishing, probably never to be relevant. Then there are “close-but-no-cigar” efforts such as that of Mark Tarver, whose Lisp descendant Qi was proprietary, and who almost learnt the lesson with its successor, Shen, except that he forbade redistribution of derivative works that do not adhere to his spec for the language, rather than simply requiring that such derivatives change their name.
Stupidest of all, however, are the academic projects that suffer the same fate. Until around 1990, code was only a by-product: academics mostly wrote papers, and it was the published papers that contained the interesting information; you could recode the systems they described for yourself, and since programs were short and systems short-lived and incompatible, that was fine. Now, research prototypes often involve significant engineering, and unpublished code is wasted effort. It’s incomprehensible that publicly-funded researchers are even allowed not to publish their code, but some don’t. The most egregious current example is the Viewpoints Research Institute, whose stellar cast, among them the inventor of Smalltalk, Alan Kay, is publishing intriguing papers but only fragments of code: it’s as if he’s learnt nothing from the last 40 years.
I really hope this problem will die with the last pre-internet generation.
Wed, 27 Feb 2013 23:57:13
Raspberry Pi: enough to go round?
Raspberry Pi’s most important achievement so far is in generating considerable publicity. The genius of its marketing is that it appeals to the current generation of tech journalists, who were raised on the 1980s home computers whose spirit it invokes. Whether it will appeal to today’s children, however, is less obvious, and arguably more important. Having the inner workings exposed (unlike home computers) should help, though it also exposes a serious failing (more on that later).
What measure of success?
How successful it is likely to be depends largely on what you think the problem is.
Reproducing the past
Raspberry Pi’s founder, Eben Upton, defines the challenge as getting a pocket-money-priced computer, suitable for teaching children to program, mass-produced. The R-Pi meets that definition for the sort of middle-class household that boasts a spare HDMI-compatible monitor plus an old mouse and keyboard and offers generous pocket-money; but elsewhere, failing to count input devices and a display in the cost seems disingenuous, and an all-software solution that ran on just about anything, including phones and old PCs, and could be freely downloaded, would seem nearer the mark.
Upton would reply that merely adding “an app for that” doesn’t invite the child to program as the old home computers did (when you switched them on you were immediately presented with a programming environment). Compare this with games consoles and PCs: you can program them, but by default they offer games or an office desktop respectively.
More seriously, many parents quite rightly lock down the computers their children use to prevent their visiting undesirable web sites or installing new software, or even insist on their being supervised: forbidding conditions under which to nurture the sort of exploratory play by which we all learn to love programming. A separate device which belongs to the child, contains no sensitive parental data, and can’t go online addresses all these problems, and the child can be left alone with it as safely as with a book.
Rebuilding the workforce
So far, so good: we’ve recreated a small corner of the 1980s, and a small self-selecting segment of relatively privileged children will have a chance to become programmers. But we already need far more programmers today than when the children of the 1980s entered work, and we’ll need even more when today’s children grow up. To make up the shortfall, programming needs to go mainstream.
This is a challenge that’s already being met locally in many areas; Upton’s approach is to reach out to children directly via programming competitions (or “bribery” as he calls it); although this approach might work without substantial involvement by schools, it seems unwise not to make a serious push for inclusion in the school curriculum.
Remaking society
I believe, however, that programming is far more important and central a skill for the modern world than even its most ardent industrial cheerleaders suggest. Being a non-programmer today is like being illiterate two hundred years ago: it’s possible to get by without understanding anything about programming, but you end up relying heavily on others.
It’s a subtle point, because it’s rare that one needs to actually read or write code; rather, one needs to understand how computers work because increasingly they are embedded in, and hence govern, the systems we use to organise our lives.
Many competent and confident users of computers are reduced to impotent gibbering by machine malfunction, because learning how to operate a computer gives one very little insight into how computers fail, whereas understanding bugs and other failures is central to learning how to program. It’s as if the person who could help you repair your blender were the one you’d ask how to cook a soufflé, or as if the person best able to navigate a car were a mechanic.
(Why computer systems are like this is a fascinating question whose answer involves the immaturity of the technology, its complexity, and the degree to which interface and systems design is still driven by technical rather than human considerations, but one I can’t elaborate on further here.)
Even more important is the mindset underlying programming: programmers, like scientists, believe that systems have rules which, if they can’t be looked up (“reading the source code”), can be discovered and codified (“reverse engineering”). But programming has an additional, empowering belief: that rules can be changed or replaced. In a society that is increasingly rule-bound and run by machines, a programmer’s mindset offers both the belief that things can be improved, and the tools to change them. That is why it’s essential that every child should understand at least the principles of programming, even if they never read or write a line of code as an adult.
Scaling up
Hence, it is necessary that programming become part of the core school curriculum, and it will be a good sign that it is embedding itself in our culture when it becomes so. Raspberry Pi has three major problems here: the hardware, the software, and connectivity.
Seeing to the bottom
The problem with the hardware is optically obvious, because of the R-Pi’s lack of external casing: it’s entirely closed. You can see the components, but you can’t take it apart to see how it works, or modify it in any way. This is partly a result of the nanometre scale on which modern electronics is built, but it’s also caused by the increasingly draconian intellectual property régime under which we suffer. Unfortunately, the beating heart of the R-Pi, a Broadcom SoC (“System on a Chip”), is a prime example of this.
Even more unfortunately, it’s hard to see how anything like the R-Pi could be built without such regressive technology (in this case, via special help from Broadcom that Upton, as an employee, managed to secure). All this means that the R-Pi is not only of little use in firing the imagination of the next generation of hardware engineers (just as sorely needed, if not as numerously, as the software kind), but that its hardware also reinforces the “black-box, do not touch” mentality that its software is trying to break down.
Programming for all
Unfortunately, the programming environments provided, although open, are the standard machine-first arcane languages and tools that adults struggle with. Why not use something like Squeak Etoys, which is based on decades of research in both programming and teaching programming? (The plurality is part of the problem too: the R-Pi offers distracting choice, unlike old home computers which simply dumped you into their one built-in programming environment.) Fortunately, this is easy to fix: just update the software shipped with R-Pi.
Changing the world, learning together
The final problem, connectivity, is a subtler one. Above, I mentioned that an advantage of giving a child their own device is that it need not be connected to the internet, and hence can be safe for them to play with unsupervised. But the R-Pi lacks other sorts of connection that are important. First, it can’t affect the world physically (though peripherals attached to it could). While the privacy and absolute power one enjoys in the virtual world inside the computer is exhilarating and empowering, children also love toys that have real world effects, and it’s an important aid to the imagination to see that one’s electronic creations can have direct physical outcomes.
The Logo systems of the ’70s and ’80s had a natural real-world extension in the form of drawing “turtles”; today we have Lego Mindstorms, but they’re expensive, and only partly open. What we need is a RepRap for children. Secondly, children want to play with each other; their computers should be able to network too. The One Laptop Per Child machines do this; R-Pis should be able to too (and again, fortunately, it’s mainly a matter of software).
Feeding the five million
In summary, Raspberry Pi is, closed hardware aside, a great platform that could help catalyse a much-needed revolution in the perception of programming. The good news is that the remaining technical steps are in software, and can be taken without the heroic step of re-mortgaging one’s house, as Upton did to fund R-Pi. The bad news is that the rest of the job is social, and hence much trickier to achieve than a bank loan.
Wed, 02 May 2012 17:07:41
Computing can’t be left to teachers and business
Today the education secretary, Michael Gove, announced an overhaul of the ICT curriculum. This is good news and long overdue; having recently been castigated by the great, the good, and Google for our poor ICT teaching, the government has responded and is launching a campaign to overhaul the way ICT is taught: out with word processing and spreadsheets, and in with programming.
So, I should be happy: mission “innocents saved” accomplished? Sadly not; apart from the natural wariness of any “major government initiative”, this one falls down in two important ways.
First, Gove made his big announcement at an education industry gathering, BETT, and made several references to the importance of industry, both as determining what skills should be taught, and as partners to help teach them. In some vocational subjects, this makes sense, but ICT is a compulsory part of the core curriculum. It is not the function of education to prepare workers for business, and businesses are neither interested in nor competent to decide how to educate people. There’s a very obvious sense in which this is the case as far as ICT goes: children must be educated for life (even if the rhetoric of continuing education bears full fruit, adults simply cannot learn as children do), while the ICT skills that business demands change every few years. So here, as in traditionally academic subjects, we should view any industry involvement with the scepticism (and, dash it, cynicism) that it deserves.
Secondly, the announcement essentially removes ICT from the National Curriculum (the Whitehall-speak is “withdrawing the Programme of Study”). There are positive noises about supporting teachers with actual money, alongside the usual guff about liberating them, but the government are still washing their hands of responsibility for what is now the most important subject taught in schools.
As the culture minister, Ed Vaizey, understands, knowledge of how computers work is now as fundamental as literacy. It’s too basic and important to leave unsupervised even if, on the other hand, it’s so new and changing so rapidly that Gove is correct in saying that a traditionally-written curriculum “would become obsolete almost immediately”.
But the elements of computing do not change so rapidly, and they are the important bit. In the mid-’90s the undergrad course on computation theory I attended was thirty years old, and it was just as relevant and up-to-date as it had been when it was written. Many of the computer languages and operating systems in use today are at least as old, as are almost all of the concepts on which they are based.
And although many of the elements have been with us for decades, they are only now becoming fundamental to our society in the same way as literacy and numeracy. Very few people have any idea what that really means. Two crucial points need to be made: that everyone needs to learn how to program, not just programmers; and that programming is not just about computers, just as literacy is not just about speaking, reading and writing. The programming mindset can transform one’s world-view, and, like literacy, it’s particularly empowering, as it brings not only an understanding of how to decompose problems and invent rules to solve them, but the sense that the rule systems which govern our society are software, and can be changed.
Working out how to get all that across will certainly be aided by freeing teachers to experiment. Championing the process while capturing and disseminating best practice and embedding it in our culture needs central leadership. This is a far from unpromising announcement, but it’s only the beginning of the cultural shift we really need.
Wed, 11 Jan 2012 19:38:44
Old-hat futurism
I had not previously heard of Ben Hammersley, but he says he “helps people understand the modern world.” Recently, he gave a speech to the IAAC (Information Assurance Advisory Council), and the tone was very much “you are all living in the past”. He makes some excellent points in the second half of his talk, about how security theatre is widely seen by the public as an oppressive sham, and how it’s no longer acceptable for leaders to be proud of their technical incompetence, but the first half is both out of date (worrying for a self-described futurist) and out of kilter (worrying for someone supposedly acting as a “translator” between those inventing the future and those running the show).
As often with people who get it badly wrong, he starts from the right premise: he quotes William Gibson: “the future is already here, just not evenly distributed”; and then ignores it, going on about how our lives are now all on Facebook, how we all expect people to be instantly available on the end of a phone 24/7, and so on. This is, of course, all true…for the tiny minority in power. But he’s missed the other (and sharper) edge of Gibson’s blade: the future, like the wealth to which it’s so tightly linked, is getting more and more unevenly distributed: not only are there people half-way round the planet still living in the stone age, but there are people a few hundred yards away living half in the present and half in the 1970s. That’s a much more important split than Hammersley’s, between people who grew up before and after the end of the cold war. Sentiments like “Facebook, Twitter, Google and all the rest are, in many ways, the very definition of modern life in the democratic west” are just evidence of the echo-chamber mentality those three engender. And anyone who still believes the absolute “networks beat hierarchies” simply hasn’t paid attention since 9/11.
Scariest of all, tiny as the minority it represents is, this view really is reality, in the sense that it’s true for everyone with power, and the powerful have a huge impact on what happens to everyone else.
I’m not sure I really want our rulers to understand Hammersley’s future (delay the inevitable as long as possible!), though I also suspect many of them have a much better grasp of it than he gives them credit for.
Sun, 25 Sep 2011 13:32:32
New Programmers Wanted For Old Stuff
Although computer science seems to have lost the glamour it had in the ’80s, there still seems to be a steady stream of volunteers to work on all sorts of exciting free and open-source software projects (even though my alma mater is having trouble finding good applicants to read Computer Science; more on that story earlier, and also, I hope, later).
But what about the less exciting stuff? The fundamental tools and applications that we programmers still use, directly or indirectly? I mean GNU coreutils and GNU autotools, not to mention pieces that we take even more for granted, such as the shell, the C library and the kernel. (In case this all sounds like disguised Linux-centrism and you’re wondering why I didn’t just say “bash, glibc and Linux”, that’s because while I work mostly on GNU/Linux, I’m mostly interested in portable programs.)
“But isn’t this all legacy stuff?” I hear you cry. If you never stray from the comforts of Eclipse, then maybe yes, but there are still plenty of us typing “ls” and “grep”. If you’re one such, and you contribute to free software, why not help out? It’s not all legacy code in maintenance mode, and we certainly need help. Rather like the MS-DOS team in the early ’90s, there’s a tiny core of maybe a few dozen major contributors maintaining much of the command-line software stack (outside the kernel and gcc). Unlike them, we are mostly not paid to do so; but we do have many opportunities for innovation and invention.
The UNIX command-line may seem like a dead backwater, of interest only to the dull writers of sclerotic standards, but that’s to mistake effect for cause. Yes, it’s mature, and hence capable of standardization, and that’s a good thing: even 10 years ago, many UNIX boxes lacked a decent POSIX implementation, whereas now almost all have one (or can get one by adding GNU). The ISO C99 standard added important features to the language. GNU autotools has matured from a somewhat cranky portability tool to a great leveller, making it easy to write code that will build and run on any major OS (yes, including Windows), thanks not just to increasing maturity and stability, but also to new projects such as the amazing gnulib, which papers over the cracks in a wide range of POSIX API implementations and provides useful data structures and other APIs missing from the standards, and autoconf-archive, which supplies autoconf macros for dozens of common configuration tasks and for hundreds of languages, tools and libraries.
Using these tools I was able to remove all platform-specific C from GNU Zile, a cut-down Emacs clone, cutting its code size by about 2,000 lines (20% of the code base), and slash the size of its configure.ac (build system configuration file), all while adding a test suite with nearly 100 tests, plus a few extra features.
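To make that concrete, here is a minimal sketch of the kind of configure.ac these tools enable; the package name, file layout and choice of autoconf-archive macro are illustrative, not Zile’s actual configuration:

    # Illustrative configure.ac sketch, not Zile's real one.
    AC_INIT([hello], [1.0], [bug-hello@example.org])
    AC_CONFIG_AUX_DIR([build-aux])
    AM_INIT_AUTOMAKE([-Wall -Werror foreign])
    AC_PROG_CC
    gl_EARLY            # gnulib: must come straight after AC_PROG_CC
    gl_INIT             # gnulib: set up the modules imported by gnulib-tool
    AX_CFLAGS_WARN_ALL  # autoconf-archive: enable the compiler's warnings
    AC_CONFIG_HEADERS([config.h])
    AC_CONFIG_FILES([Makefile lib/Makefile src/Makefile])
    AC_OUTPUT

gnulib-tool copies the modules you ask for into lib/ and writes a Makefile.am for them, so the portability fixes come along for free at build time.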
And it’s not just Zile: stalwarts like GNU grep and coreutils have been made over, and, largely unnoticed by users, are looking much prettier under the hood (though there are important bug fixes, new features and performance improvements too). Even Emacs, with its immense code-base and ancient build system, is gradually being brought up to date.
The most exciting thing for me is the synergy: the more the tools are improved, the greater the leverage obtained when they are used; and the more they are used, the less effort is required to maintain the packages, so the work becomes easier: less time wrestling the system, more time improving it. And more fun: if you think C is hard, dull and slow work, think again. We too can have quick rebuilding thanks to ccache, and easy bug-bashing with Valgrind, not to mention code completion and navigation, whether in an IDE like Anjuta or in the evergreen Emacs, which is finally integrating and polishing a decade of work on modern IDE tools that until now was largely invisible (and unusable).
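To give a flavour of such a session (a hypothetical example; the binary’s path is illustrative, and it assumes ccache and Valgrind are installed):

    ./configure CC="ccache gcc"          # compile via ccache: rebuilds after the first are near-instant
    make
    valgrind --leak-check=full src/zile  # run the binary under Valgrind to catch leaks and bad accesses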
Unfortunately at the moment this reduction in effort is being absorbed simply by enabling a tiny team to keep more packages up to scratch, but wouldn’t it be great if more people joined in?
Next time: the future of the past: it gets even more exciting!
Tue, 26 Apr 2011
Lots in translation
Recently I went to a recital by a couple of friends, and discovered that I’m not the only one who gets my recital programme translations from The Lied, Art Song and Choral Texts Page. Many translators are happy to let you use their translations in concert programmes, suitably acknowledged. The original lyrics, where available, are reliable (usually checked against original sources), often include variants set by different composers (who often seem either too lazy to check an original copy, or to think they can improve the text), and are out of copyright, so you don’t need permission to use them.
There are some mistakes in the translations; as I’ve said before for ebooks, if you find mistakes, please fix ’em: Emily Ezust, the maintainer of the site, is happy to receive corrections. Most are useful for study; many are not bad for programmes, though I prefer a little more poetry in a programme translation than in a strict gloss for the singer.
Back to the recital, and there was one oddity that even copy-and-paste couldn’t explain. The last song in the programme was Schubert’s Ave Maria (why, in a recital, sing it in Latin?), and the translation was the version of the Hail Mary familiar to Anglophone Catholics for some centuries. But it too had a copyright notice!
Proof sought
Christmas 2010 was when half the people I know got Kindles. I even got one myself, as a Christmas present for my girlfriend. Suddenly, there are millions more pairs of eyeballs on digital books, many thousands of which belong to acute brains adept at finding errors and misprints. And nearly all of them, I fear, are going to waste.
There are three reasons for this.
First, the Kindle does not encourage you to read free books. (There’s little point proofing commercial books, not so much because they tend to contain fewer errors as because I’ve never come across a non-academic publisher who cares; and anyway, if you’ve paid for a book, shouldn’t someone have proof-read it? I’d be interested to see publisher initiatives here, though; bug bounties, anyone?) This problem is easily fixed, though: many sites offer a huge range of books you can download to your Kindle via its built-in web browser, including the oldest, biggest and best free online book repository, Project Gutenberg. So go to it: sample that rapidly growing treasure trove of tens of thousands of public domain works, and never pay for a downloaded book again.
Secondly, no ebook or reading program I’ve yet seen has built-in functionality for noting errata. I use bookmarks in FBReaderJ; Kindle users can use notes. But even this primitive method is easy to use and I find it rarely interrupts the flow of reading, even for books containing hundreds of errors. So, note any errors you find!
Thirdly, online libraries often don’t make it obvious that they welcome reports of typos and errors (they do!), or make it easy to send them. (Project Gutenberg changed its email addresses last year, to reduce spam. This wasted my time when they introduced the “2010” suffix in April, and again this year when I had to go and check whether they’d decided to update it automatically every year. It seems not. Maybe in April? Really, Gutenberg, just use spam filters, it’s what they’re for.) Many other sites repackage texts from Gutenberg; some fail to update to the latest version, such as Feedbooks, whose books work better on my phone than Gutenberg’s own, but often contain errors already fixed on Gutenberg. (They told me they have to apply updates to their books by hand; I have offered them help with automatic updating, which isn’t rocket science, just the tools programmers use all the time, but without success so far.) It’s clearly best to report errors to the original source of the text if at all possible, but if you can’t, don’t worry. Spend a couple of minutes finding out how to report typos to wherever you got your book from, and try it; you’ll soon find out if they’re unappreciative.
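For the curious, the sort of automatic updating I have in mind is no more than a three-way merge with everyday tools (file names hypothetical): treat the upstream revision a repackaged book was derived from as the common ancestor, and merge in the latest upstream fixes:

    # ours.txt: the repackaged text; base.txt: the Gutenberg revision it
    # was derived from; new.txt: the latest Gutenberg revision.
    diff3 -m ours.txt base.txt new.txt > merged.txt
    # or, merging the fixes into ours.txt in place:
    git merge-file ours.txt base.txt new.txt

Wherever the repackager’s changes and Gutenberg’s fixes don’t touch the same lines, the merge is entirely automatic.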
We have an amazing resource here, and it’ll only get better. Digital editions, unlike their paper forebears, need not go out of print: errors can be fixed forever. If every reader of a free ebook reports half a dozen errors, even dodgily scanned texts will soon shine. And this is for everybody: free ebooks can be printed and bound, allowing imaginative publishers, libraries and donors to get them into the hands of those who can’t afford ordinary books, let alone a Kindle.
But aren’t lots of people doing this already? It doesn’t seem so: in 2010 Project Gutenberg started an automated errata tracker, which allocates each new erratum report a different number. By the end of 2010 it was up to about 500; by contrast, in 2010 alone several different open source software projects racked up over 100,000 bug reports each. [Stop press: Gutenberg now seems to have abandoned automated erratum numbering.] Despite its richness, Gutenberg has a handful of full-time employees, and runs on volunteer labour and donations (by definition, it can’t ask for money for its books). And that’s just the biggest Gutenberg project, in the US. To avoid exposing themselves to the vagaries of international copyright law across different régimes, the various Gutenberg sites in different countries are entirely independent. Gutenberg Australia, the second-biggest original source of English books after Gutenberg US, is run by one person, the heroic Colin Choat, in his spare time.
So, please help!
Unchanging rhetoric on higher education
I tried to post this as a comment to the Guardian article Universities must cut private schools intake, says Simon Hughes, but the web site said “Your browser sent a request that this server could not understand.”
The tune of the government, sadly but unsurprisingly, never changes, and continues in its hypocritical vein, suggesting that ministers are not really interested in improving access to the élite universities most of them went to.
If they were, then we might hope for ministers to tell us how things have changed over time (how does access to Oxbridge now compare with 30 years ago? Much better than it was, but still some way from reflecting society), to laud successes, and to commission and act on research to improve things further.
And they might stop implying that what Oxbridge want to do is keep the plebs out and keep educating the rich.
I was briefly acting Director of Studies in Computer Science at one of the bigger Cambridge colleges in the late ’90s. My successor, a state-school educated Northerner, had to address the problem that applications were falling off (apparently in the 21st century, computers are no longer cool), and there were barely enough applicants for the places available, let alone good applicants. So, he went on the road, mostly to state schools whose students weren’t applying to Cambridge. In jeans and T-shirt he’d talk to kids, encouraging them to apply, and to their teachers. Often, it was among the teachers that he’d meet the most resistance: “Even if we did get our kids to apply to Cambridge, we wouldn’t apply to you, you’re from a posh college,” was one of the more bizarre comments he got.
Cambridge has had a university-wide programme to widen access in place since I was an undergrad there nearly twenty years ago. My college has its own scheme too, and staff were encouraged to do the sort of thing my friend did. The University is desperate to get good students (even at the peak of the Computer Science boom, in the early ’90s, the department was worried that the maths skills of applicants were weak), and it doesn’t care where they come from.
The way the government is increasingly piling up-front costs on to students, the answer is going to be “from rich families and/or abroad”. The new funding system may be rational, it may even be fair, but it won't broaden access, and my friend will still be left wondering where on earth he is supposed to find the next generation of computer scientists.
Sat, 08 Jan 2011 19:29:32
Older entries
Last updated 2023/03/21