The Null Device
Posts matching tags 'computer science'
InformIT has an interview with Donald Knuth; he's skeptical about multicore processors, unit testing and reusable code, doesn't like the idea of eXtreme Programming™, and has more or less conceded that literate programming is unlikely to become mainstream any time soon, whilst still believing that it is a superior way to write code:
In my experience, software created with literate programming has turned out to be significantly better than software developed in more traditional ways. Yet ordinary software is usually okay—I’d give it a grade of C (or maybe C++), but not F; hence, the traditional methods stay with us. Since they’re understood by a vast community of programmers, most people have no big incentive to change, just as I’m not motivated to learn Esperanto even though it might be preferable to English and German and French and Russian (if everybody switched).
Jon Bentley probably hit the nail on the head when he once was asked why literate programming hasn’t taken the whole world by storm. He observed that a small percentage of the world’s population is good at programming, and a small percentage is good at writing; apparently I am asking everybody to be in both subsets.
With the caveat that there’s no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I’ve ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go...with one exception. The exception is the idea of working in teams and reading each other’s code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.
Such as the following: 1) simulate how a crowd flees from a burning car toward a single evacuation point; 2) test out how a pathogen might be transmitted through a mobile pedestrian crowd over a short period of time; 3) see how the existing urban grid facilitates or does not facilitate mass evacuation prior to a hurricane landfall or in the event of dirty bomb detonation; 4) design a mall which can compel customers to shop to the point of bankruptcy, to walk obliviously for miles and miles and miles, endlessly to the point of physical exhaustion and even death; 5) identify, if possible, the tell-tale signs of a peaceful crowd about to metamorphosize into a hellish mob; 6) determine how various urban typologies, such as plazas, parks, major arterial streets and banlieues, can be reconfigured in situ into a neutralizing force when crowds do become riotous; and 7) conversely, figure out how one could, through spatial manipulation, inflame a crowd, even a very small one, to set in motion a series of events that culminates in a full scale Revolution or just your average everyday Southeast Asian coup d'état -- regime change through landscape architecture.
Or you quadruple the population of Chicago. How about 200 million? And into its historic Emerald Necklace system of parks, you drop an al-Qaeda sleeper cell, a pedophile, an Ebola patient, an illegal migrant worker, a swarm of zombies, and Paris Hilton. Then grab a cold one, sit back and watch the landscape descend into chaos. It'll be better than any megablockbuster movie you'll see this summer.

And here are emotional maps of various urban areas, including parts of London and San Francisco, created by having volunteers walk around them with GPS units and galvanic skin response meters.
(via schneier, mind hacks)
This is pretty impressive; a new algorithm that, when presented with a photograph with a hole cut out of it, searches a database of millions of other photographs, presents the user with a menu of similar-looking images to select from, and then composites elements of the chosen image to fill the hole seamlessly, producing an image which (in most cases) looks semantically coherent. Most impressively, it is entirely data-driven, and does not require any human-generated annotations of test data:
It uses mathematical properties of the images to make the match, and sometimes ends up serendipitously picking other images from the same location (because two photographs of, say, the Taj Mahal taken on a sunny mid-afternoon are likely to share similar properties).
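That matching step can be sketched roughly as nearest-neighbour search over compact image descriptors. (The actual system uses GIST features plus colour information over millions of images; the toy descriptor below, a tiny downsampled thumbnail, is a stand-in for illustration only.)

```python
import numpy as np

def descriptor(image, size=4):
    """Crude scene descriptor: a tiny downsampled copy of the image.
    (A stand-in for the GIST-plus-colour features the real system uses.)"""
    h, w = image.shape[:2]
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    return image[np.ix_(ys, xs)].astype(float).ravel()

def nearest_scenes(query, database, k=3):
    """Indices of the k database images most similar to the query,
    by Euclidean distance between descriptors."""
    q = descriptor(query)
    dists = [np.linalg.norm(descriptor(img) - q) for img in database]
    return np.argsort(dists)[:k]
```

With a large enough database, the nearest neighbours of a holiday snap tend to be photographs of visually similar scenes, which is what makes the seamless compositing plausible.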
Of course, it is possible to use such a tool creatively, replacing unwanted parts of an image with elements from a completely different scene, as the paper (PDF here) shows:
Someone has written a program for generating random computer-science papers, designed to scam dubious conferences, apparently with some success:
One useful purpose for such a program is to auto-generate submissions to "fake" conferences; that is, conferences with no quality standards, which exist only to make money. A prime example, which you may recognize from spam in your inbox, is SCI/IIIS and its dozens of co-located conferences (for example, check out the gibberish on the WMSCI 2005 website). Using SCIgen to generate submissions for conferences like this gives us pleasure to no end. In fact, one of our papers was accepted to SCI 2005!
The authors intend to attend the conference in question and deliver a randomly-generated talk.
A sample of its output (without the authentic-looking graphs), excerpted from a paper titled "Refining DNS and Suffix Trees with OWLER":
We have taken great pains to describe out evaluation setup; now, the payoff, is to discuss our results. We ran four novel experiments: (1) we deployed 86 Atari 2600s across the underwater network, and tested our checksums accordingly; (2) we ran 34 trials with a simulated instant messenger workload, and compared results to our hardware deployment; (3) we measured flash-memory space as a function of ROM speed on a Motorola bag telephone; and (4) we asked (and answered) what would happen if mutually replicated vacuum tubes were used instead of I/O automata. All of these experiments completed without LAN congestion or 10-node congestion.
I take my hat off to them. When I wrote the Postmodernism Generator, all those years ago, I was sceptical of the possibility of successfully generating convincing random text in a more objectively verifiable field, such as computer science. I guess that, if those responsible for reviewing the paper can't be bothered to actually read it and attempt to assemble a mental model of what it states, one can get away with anything.
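The basic trick behind generators like SCIgen and the Postmodernism Generator is expanding a hand-written context-free grammar with random choices. Here's a minimal sketch; the grammar itself is invented for illustration (SCIgen's real grammar is far larger and hand-tuned):

```python
import random

# Toy context-free grammar: uppercase tokens are nonterminals expanded
# recursively, everything else is emitted literally.
GRAMMAR = {
    "SENTENCE": [
        "In this paper we ACTION TECHNOLOGY , which we call SYSNAME .",
        "We ACTION TECHNOLOGY using SYSNAME , with surprising results .",
    ],
    "ACTION": ["disprove", "refine", "synthesize"],
    "TECHNOLOGY": ["suffix trees", "vacuum tubes", "replicated checksums"],
    "SYSNAME": ["OWLER", "Bag-Telephone"],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a string of terminals."""
    if symbol not in GRAMMAR:
        return symbol  # terminal word: emit as-is
    production = rng.choice(GRAMMAR[symbol])
    return " ".join(expand(token, rng) for token in production.split())

print(expand("SENTENCE", random.Random(2005)))
```

Because every expansion is locally grammatical, the output parses fine sentence by sentence; it's only when a reader tries to build a coherent mental model across sentences that the game is up.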
Formulaic music isn't just for the teeny-boppers and pissed-off teenagers. Computer scientist and songwriter Loren Jan Wilson has developed a system to analyse Pitchfork music reviews, finding which words have the most positive connotations, and then used the results to write two songs, scientifically designed to appeal to the coolsies who write for Pitchfork.
There are positive values for "rough" and "primitive," and negative values for the words "shiny" and "polished." This points towards a preference for lo-fi recordings, which are usually associated with lower-budget independent music. This falls in line with the Pitchfork reviewers' dislike of capitalism, which I talk about a bit in the other interesting results section below.
The "sadness" group is by far the highest-scoring mood, beating the next mood ("dark") by over 1100 points. As a response to that, I've tried to make these songs as sad as possible.
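The scoring step can be sketched as follows: for each word, take the mean score of the reviews it appears in, relative to the overall mean, so that words which co-occur with favourable reviews score positively. (This is a guess at the spirit of the analysis, not Wilson's actual method.)

```python
from collections import defaultdict

def word_scores(reviews):
    """Map each word to (mean score of reviews containing it) minus the
    overall mean score. Positive values mark words that co-occur with
    favourable reviews; negative values, with pans."""
    overall = sum(score for _, score in reviews) / len(reviews)
    totals, counts = defaultdict(float), defaultdict(int)
    for text, score in reviews:
        for word in set(text.lower().split()):  # count each word once per review
            totals[word] += score
            counts[word] += 1
    return {w: totals[w] / counts[w] - overall for w in totals}
```

Run over a corpus of real reviews, a table like this would surface exactly the "rough"-beats-"shiny" pattern quoted above.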
The songs, Kissing God and I'm Already Dead, are provided in MP3 form, along with detailed descriptions of how the analysis guided his creative decisions. The songs, as you'd expect, combine gloomy lyrics, lo-fi guitars, choppy beats and layers of effects.
It'd be interesting if he had gotten Pitchfork to review these songs before revealing their origin, if only to see whether he'd have been critically lauded as the next Radiohead or whatever.
Salon asks whether "geek chic" will kill off innovation; the thesis is that now that "nerds" are no longer persecuted and ostracised, they won't have the impetus (or time, between all the parties and dates in their social PalmPilots) to invent, create or otherwise contribute to society. Or, to put it in other words, that innovation requires two components: individuals with technical intelligence or other skills (these would include artists and musicians), and the ostracism/persecution of said individuals. Which is an interesting theory. (via TechDirt)
(If one wants to get Freudian, one could argue that said individuals' lack of a sex life resulted in them sublimating their libidos into creative enterprises. If that holds true then, given the rise of "nerverts", Heinleinian polyamorists, netsex, webcams and the like, we're, well, fucked. Though hasn't polymorphous perversity been a feature of the fringes of society since the 1960s at least, if not the days of the Hellfire Club?)
Another criticism of the theory is that the "nerd" stereotype doesn't hold for most IT people, and hasn't since at least the 1990s. From what I remember, many of the people who did computer science when I went to university were well-rounded individuals, with social lives, girlfriends (they were predominantly male; computer science is almost a monastic environment, but that's another post) and non-computer interests. Many played sports in their spare time, and many were quite good programmers. Whether these people fall into the "nerd" category is debatable.
But yes; if innovation depends on talented outsiders, the "nerd" bar will just be raised higher, and there always will be some who don't want to go to the numerous parties they keep getting invited to but would rather sequester themselves and follow some intellectual passion. And if that fails, there are always autistic savants.
A few bits lifted from Techdirt. Firstly, secretive Stalinist cult-state North Korea has staked its claim to the Internet Age. The rigidly centralised, computer-poor nation claims to have invented the computer drink. Ah, good; we needed one of those.
But what it lacks in utility, it makes up for in entertainment value. The Ectaco Personal Translator proved the perfect icebreaker during a dinner party in rural France. It turned "thank you for the great dinner" into "it was disgusting," and "you are very beautiful" into "how much?" What better way to break the ice with a roomful of total strangers in a foreign country whose language you don't know?
Computer scientists at MIT prove that Tetris is NP-hard; i.e., optimally stacking blocks is in the same class of problems as things like the Travelling Salesman problem, meaning that there is no known way to solve them efficiently. Maybe this means that we'll soon see Tetris-based cryptographic algorithms?