Tim Bray has a brief posting with what is likely to turn out to be an incendiary issue: if you come from a Java background, you will write better libraries (and code) in other languages than if you didn’t have a Java background. I can see this particularly outraging much of the Ruby crowd, who seemingly live to smugly disparage anything that is not Ruby.
…Once the pleasantries were out of the way, he started the first lecture, which was about the composition of the atmosphere. Everyone started taking copious notes. He told us that Nitrogen was 78% of the air we breathe, with Oxygen accounting for 21% and the remainder taken up by Argon, Carbon Dioxide, and other gases.
He then proceeded to explain that Nitrogen had a pink color and a slightly sweet smell. Like good students, we continued to record this valuable information into our study notes. After several more minutes of lecture he stopped, and then exclaimed “are you students morons??!!”. Needless to say, this caught our attention and we instantly brought our heads out of our books.
He continued: “If Nitrogen was pink and formed 78% of the air, the classroom would look pink! Are your brains even turned on right now?!” He proceeded to berate us for being so gullible, and then used the situation to segue into a discussion of the ingredients of science; observation, theory, and rigorous testing.
As always, a compelling and insightful commentary by IBM’s Irving Wladawsky-Berger (via AlwaysOn); this time discussing the effect that technology standards are having (or will have) on IT Strategy. One of the points that he makes is that standards aren’t just about software leverage; hardware and web services standards are going to allow enterprises to grow and share in ways they couldn’t easily before.
Now, what we have seen is the continuing emergence of standards as we keep going up the stack. In this world of grid computing, what you’re really trying to do is share all kinds of IT resources—computing capacity, storage, files, applications, and so on—all built around the common standards that everybody uses. So you can essentially begin to virtualize the system so that people can access your resource without having to know precisely where that resource is. A typical example that has been used in, let’s say, supercomputing systems is that you can form a grid out of multiple supercomputers in a location or in a country, and when somebody submits a job they submit it to the grid. And then the systems themselves get their act together, find where they have capacity, and make sure they can access everything, but you’re essentially sharing all the extreme capacity that wasn’t there before.
I found this posting by Danny Weitzner on Privacy, practical obscurity and the power of the Semantic Web to be very thought provoking. It does an excellent job of summarizing the legal concept of ‘practical obscurity’ and provides some food for thought on what this might mean as more and more information can be joined together and made available through Semantic Web activities.
It’s that time of year when the clocks leap forward by one hour in North America. I find it amazing that so few companies make it easy for consumer goods to deal with this twice yearly event. Why don’t alarm clocks, microwaves, etc just deal with the time change? No, Really.
The irony is that I can go to Target and buy a $12USD digital watch that can automatically adjust to seasonal time changes, but the sorry clock in my $XX,000 car can’t make the same adjustment. Seems odd to me; I can’t believe that more products can’t incorporate that two cent component and a simple way to say what timezone you are in to free consumers from having to manually make these changes.
I’ve been thinking about all of the places that ‘allow’ customers to do the data entry tasks for them with little in the way of reward back to the customer. Think about it: you get to key in all the information for your airline reservation, but what do you get in return (ok, maybe a discount, but hear me out)? What I would like to see happen is that more online companies provide value added information in return.
For example, when I make an airline reservation, why can’t the airline shoot me an iCalendar with all of the departure/arrival information that I can drop into my calendar? If I order some merchandise from an online vendor like Apple that requires a signature on arrival, why not provide me (again) with an iCalendar that I can easily add to my calendar so I can make sure someone is available to sign for the delivery? You would think that the delivery companies (UPS/DHL/FedEx) would be all over this as it saves them the time/effort/fuel associated with re-delivery. For that matter, why not give me an Atom/RSS feed that allows me to easily track the package? Once the package is delivered, they can trash the feed URL. Actually, the same would be cool for the airline example as well.
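To give a sense of how little would be involved, here is a minimal sketch of the kind of iCalendar (RFC 2445) payload an airline could attach to a confirmation email. The flight number, airports, and times are invented examples, and a real implementation would want timezone handling rather than bare UTC stamps:

```python
from datetime import datetime

def flight_to_ics(flight, depart, arrive, origin, dest):
    """Build a minimal iCalendar VEVENT for a flight.
    All flight details passed in are hypothetical examples."""
    fmt = "%Y%m%dT%H%M%SZ"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//Example Airline//Itinerary//EN",
        "BEGIN:VEVENT",
        # UID must be globally unique; flight + departure time works here
        f"UID:{flight}-{depart.strftime(fmt)}@example.com",
        f"DTSTART:{depart.strftime(fmt)}",
        f"DTEND:{arrive.strftime(fmt)}",
        f"SUMMARY:Flight {flight} {origin} to {dest}",
        f"LOCATION:{origin}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

ics = flight_to_ics("UA123",
                    datetime(2006, 4, 1, 14, 30),
                    datetime(2006, 4, 1, 17, 45),
                    "SFO", "ORD")
print(ics)
```

Attach that as a `.ics` file (MIME type `text/calendar`) and nearly any calendar application of the day could import it with one click.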
This isn’t such a leap — many banks allow you to get your transaction information in a format (QIF) that you can easily import into Quicken; why not for the more mundane stuff as well?
But the thing that would really make this work is to craft the value added data so that it would work with mobile devices. That way I don’t need to be tied to a feedreader or calendar that is on my desktop computer; I can be anywhere. This is obviously important for the air travel scenario. Perhaps part of the problem gets solved by having a feed reader that can send SMS messages based on certain feeds changing (like my flight schedule). You can sort of make that work now with Yahoo alerts, but a more integrated solution would be preferable.
One last thought: perhaps an interim method of bridging the data gap is to provide the scheduling information in a microformat like hCalendar and embed it in the confirmation/receipt screen (HTML) that is typically provided by a web site. It could then be mined out with PiggyBank or some other GRDDL-like scraper. Not perfect, but at least avoids the re-keying that is required now.
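To make the scraping half of that concrete, here is a rough sketch of pulling event data out of hCalendar markup with nothing but the standard library. The confirmation-page markup is an invented example, and a real scraper (PiggyBank-style) would handle nested events, multiple occurrences, and more of the microformat’s conventions:

```python
from html.parser import HTMLParser

class HCalendarParser(HTMLParser):
    """Pull summary/dtstart/location out of hCalendar markup embedded
    in a confirmation page. A rough sketch only."""
    def __init__(self):
        super().__init__()
        self._field = None
        self.event = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = attrs.get("class", "").split()
        for name in ("summary", "dtstart", "location"):
            if name in classes:
                # hCalendar often carries the machine-readable value
                # in an <abbr title="..."> attribute
                if "title" in attrs:
                    self.event[name] = attrs["title"]
                else:
                    self._field = name

    def handle_data(self, data):
        if self._field:
            self.event[self._field] = data.strip()
            self._field = None

# Hypothetical snippet from a delivery-confirmation page:
html = ('<div class="vevent">'
        '<span class="summary">Package delivery</span> '
        '<abbr class="dtstart" title="2006-03-15T09:00">9am March 15</abbr>'
        '</div>')
p = HCalendarParser()
p.feed(html)
print(p.event)
```

From there, the extracted fields could feed straight into an iCalendar file or a reminder service; the point is that the data is already on the page, just waiting to be marked up.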
There has been a lot of discussion about some new features that have been added to Rojo, an online feed reader. One feature getting the most buzz is the ‘new’ relevance feature.
I don’t get it. For one, I have been a Rojo user since the shuttering of the excellent Searchfox, and I have had a ‘by relevance’ option for months. It was never clear exactly how ‘relevance’ was being determined, if at all. In response to a comment that I posted on TechCrunch someone pointed me to this write-up on ReadWriteWeb that includes the following:
First, the purpose of “relevance” is to do for feed reading what smart search engines (like Google) do for search results — figure out what to put on the front page. Many readers are overwhelmed by the number of new stories coming from their feeds every day. So Rojo Relevance is about sorting those by “relevance” rather than date, to put the good stuff on top.
So basically, as Rojo defines it, relevance is absolutely useless to me and should more accurately be called ‘popularity’. I want it to be relevant to what I am interested in, not the beauty contest/what-everybody-else-is-reading dogpile. That represents value to me.
I’ll say it again: that was the beauty of Searchfox: it paid attention to what I read and ordered my river of news according to that, so that every time I sat down to read, I was greeted with what I wanted to read first. Searchfox was also smart enough to mark a page of links as read as I advanced pages; this way, if I was interrupted in reading, I could simply hit refresh and get any new postings as well as the unread ones from my previous session. Apparently this is a very difficult concept for Rojo and others to understand and implement.
Can someone please implement a real relevance ranking in a feed reader and not another implementation of digg?
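The Searchfox-style ranking I’m asking for isn’t exotic. Here is a minimal sketch of the idea: build a word-frequency profile from what the reader has actually read, then order unread items by overlap with that profile. The item texts are invented examples, and a real reader would use smarter weighting (TF-IDF, recency decay) than raw counts:

```python
from collections import Counter
import re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def build_profile(read_items):
    """Word-frequency profile of everything the user has actually read."""
    profile = Counter()
    for item in read_items:
        profile.update(tokenize(item))
    return profile

def rank_by_relevance(unread_items, profile):
    """Order unread items by overlap with the reader's own history --
    personal relevance, not a popularity contest."""
    def score(item):
        words = tokenize(item)
        # average profile weight per word, so long items aren't favored
        return sum(profile[w] for w in words) / (len(words) or 1)
    return sorted(unread_items, key=score, reverse=True)

history = ["rfid middleware security", "sql injection in rfid readers"]
unread = ["celebrity gossip roundup", "new rfid security exploit found"]
profile = build_profile(history)
print(rank_by_relevance(unread, profile))
```

Nothing about this requires knowing what everybody else is reading; the signal is sitting right there in each user’s own click history.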
Computerworld is running an article about how some researchers are warning of the perils of RFID viruses. In a word, bullshit. An RFID tag is simply a data source and, typically, very little data at that. If you have a poorly written application that does not do the appropriate data validations you can create a problem for yourself regardless of the source of the data.
This is worth repeating: the issues identified in the article included SQL Injection, buffer overflow(!) and other classic examples of exploits that can be applied to any poorly written application. The fact that an RFID tag was the data source does not make it a specific RFID issue — it’s just data.
So if someone creates a ‘virus’ and puts it on an RFID tag, great. You can write a virus and embed it in a JPEG image or mp3 file. However, unless you have an application that specifically looks for the virus payload and provides an execution environment, there is absolutely no way that it can do any harm to your computer system — it can never actually execute the virus code.
One last point about data security. Hopefully, the industry has learned (via rampant identity theft) that putting personally identifying information in an RFID tag is a very, very stupid thing to do (and may actually be illegal in some locations). It is likely that the RFID tag would only have some sequential number/identifier that is read and tied back to some other more meaningful definition/data elsewhere. So if a bad guy gets the RFID identifier, they still have to know about the other system, locate it, compromise it and make sense of the data relationship. Could it happen? Sure. But is it as simple as the press makes it out to be? Probably not. Bonus points to the implementers smart enough to encrypt the RFID data in the first place.
Here is a rather contrived scenario from the article:
For example, airports are considering using RFID tags to track baggage. But Tanenbaum warned that this application could pose a large problem if an RFID tag is read and delivers a much larger set of data in return. A false tag on a piece of baggage could exploit a buffer overflow to deliver a virus to the RFID middleware. Once the virus code is on the server, it could infect the databases and corrupt subsequent tags or install back doors — small programs that allow for the extrication of data over the Internet, Tanenbaum said.
Right. Or you could have the appropriate data validations in your processing system and not worry about RFID data or data from any other source for that matter.
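The fix for the article’s marquee exploit is literally one line of discipline. Here is a sketch, using SQLite as a stand-in for whatever database the RFID middleware writes to: pass the tag contents as a bound parameter and the ‘attack’ is stored as an inert string. The table name and payload are invented for illustration:

```python
import sqlite3

def record_scan(conn, tag_data):
    """Treat RFID tag contents as untrusted input, like any other data.
    The parameterized query (the ? placeholder) keeps hostile tag data
    from ever being spliced into the SQL text."""
    conn.execute("INSERT INTO scans (tag) VALUES (?)", (tag_data,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scans (tag TEXT)")

# A 'malicious' payload on a tag is just bytes to a careful application:
record_scan(conn, "x'; DROP TABLE scans; --")
print(conn.execute("SELECT tag FROM scans").fetchone()[0])
```

The would-be injection comes back out exactly as it went in, and the table is still standing — which is the whole point: the vulnerability lives in sloppy application code, not in the RFID tag.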
Microsoft has ‘innovated’ again by introducing the Apple Newton 13 years after Apple originally did. From my previous post you might derive that I was a big fan of the Newton and was a bit surprised by the complaints about the form factor.
Yet here we are in 2006 and MS has produced an even larger, heavier unit than even the largest Apple Newton. Have a look at this MS article that describes how the Newton was the wrong form factor, that it should have been smaller, like a Palm Pilot, or larger, like a tablet:
The Newton hardware could have been developed along a number of lines, making it either larger or smaller. Instead, it was left in that dead zone of being too big and heavy for a pocket and too small to have a large, visible screen.
So which part of the ‘dead zone’ does that leave their new product in?
CNet has an article on how RSA is looking to turn PDAs, cell phones and other common gadgets into security tokens that can be used for strong authentication.
Instructables is an interesting web site where you can show people things that you made and share how others can make them for themselves. Definitely some creative and handy people posting on this site.
We like to think about the physical world as something that is programmable. We like to think of objects or stuff you make as ‘code’. In other words, we are approaching the physical world as something that is describable and replicable.
I find the OED a great resource to have around. We have the Compact OED version of it that has 9 pages reduced onto a single page (versus the pricey 20 volume edition). To read the entries you need to use this sort of half crystal ball magnifier that is provided along with the Compact OED. It’s a little funky, but that’s part of the fun.
If you are curious as to what went into producing the OED, I highly recommend The Professor and the Madman by Simon Winchester.
I think that using technology in humorous ways is a noble task, especially when it has the effect of humanizing things. At least that’s the way that a posting over at BoingBoing struck me — it’s about a “watch that displays cheeky ‘approximate time’ messages“.
I was a teen back when LED-based digital watches first became affordable and thus, widely available. I always had to chuckle whenever anyone with one of these watches was asked the time; they would quite earnestly respond “10:42” or “3:28” not “quarter to eleven” or “three thirty” — no siree, they knew exactly what time it was, because that’s what their watch displayed.
Of course the matter of the exact time depended on what source they used to set their watch by and the not insignificant matter of how well the watch actually kept time. These days you can buy a watch that synchronizes itself with Naval Observatory time, but I still like to keep ‘human’ time — “half past eight” instead of “8:29” for me.
And for what it’s worth, my timepiece of choice is an analog watch with a manual movement (no batteries).
I received an email from the creator of SearchFox a day or so ago stating that SearchFox would be ceasing operations as of January 25, 2006:
I am very sorry to hear this as I had been using SearchFox as my sole newsreader for several months now. The user interface was clean and easy to use, but the feature that I liked the best was how it would notice what I read and prioritize subsequent feeds so that (more often than not) what I wanted to read was at the top of my river of news.
I guess now I need to find the second best online news reader available. Google’s offering is just horrible, Bloglines (which I started with) is still pretty feature poor and Rojo really needs to do some work on usability. Interestingly, many of the features being requested by users of Rojo would, if implemented, make it on par with SearchFox. Any suggestions on online feedreaders that you are happy with?
I read several announcements today about the new Yahoo Go service. After searching endlessly for an actual URL that linked to the offering, I finally found one. I’m sure I won’t be the first or last to say that the offering should have been called ‘Yahoo No Go’. The computer portion of it is not currently available and will be PC only, the ‘TV’ portion of it is also a PC app (and also not available). Ah, the mobile portion is available for Nokia series 60 phones — I’m in luck, I have a Nokia 6620 that fits the bill.
I downloaded the ~1.7MB app and installed it on my 6620 with great interest after having seen the screen shots and read the hyperbole from the CES announcement. The ygo.sis file expanded to take nearly 4MB of phone and memory card space, then went on to download another app for connections. After the initialization, I tried out the apps. One by one, I grew less and less impressed. Yahoo Go, it turns out, is just 4MB of bloat that does nothing more than start the WAP browser on the phone (which wants you to login to Yahoo again!) to display the services that are already available to you through the Yahoo mobile site. Accessing the Yahoo mobile site directly using the Opera series 60 browser provides a much better user experience than this. What a total crock. Hell, Cingular’s J2ME IM application kicks the crap out of this thing (even as a single tasker). Google’s J2ME Maps implementation makes Yahoo’s effort look like the brown stuff in the bottom of a college dorm fridge.
I then went about removing this craplet from my phone, but guess what? As part of the removal process, it wants to ‘phone home’ to Yahoo and waste more bandwidth before it will remove itself (shades of Microsoft). It’s no wonder that Yahoo recommends that you allow the app to connect whenever it wants to (versus letting you know that it wants to make a connection). I finally had to go into the Nokia AppManager and delete the various pieces of it manually.
Based on this experience, Yawnhoo needs to try harder and put out something useful (other than hype).
The BBC have an interesting article titled “Do you get less wet if you run in the rain?“. The article delves into the ‘serious’ mathematics and physics to consider in answering the question. I won’t spoil the conclusion — read the article.
However, I did appreciate one of the post comments which reminded me of my Intelligence and Effectiveness rant from the other day:
Alternatively, ignore the maths and get an umbrella.
I recently finished reading a book that got me thinking about the relationship between science and religion. The book in question is The Universe In A Single Atom. A reading of this book, coupled with current events should give anyone pause for thought.
First, it was refreshing to read that in Tibetan Buddhism, the Dalai Lama encourages all monks (and lay people) to have a knowledge of science. It is in no way perceived as a threat to religion. In fact, both are seen as a way of understanding the nature of reality by means of critical investigation. This is not surprising, as one of the fundamental tenets of Buddhism is to ‘believe only what you have found to be true yourself’ — religious dogma is not forced on the follower. Throughout the book, the position held is that the benefit of science is that it can work to ease suffering at a physical level. Religion can strive to do the same for mental suffering. Contrast this with the veritable war on science that is coming from the right in this country and you have to wonder about the nominal aims of such a course of action. For religious fundamentalists, there is no place for science (or any opposing view point for that matter), only blind faith.
One longish quote from the book expands on a more enlightened view of the interplay of science and religion (in this case Buddhism):
“In one sense the methods of science and Buddhism are different: scientific investigation proceeds by experiment, using instruments that analyze external phenomena, whereas contemplative investigation proceeds by development of refined attention, which is then used in the introspective examination of inner experience. But both share a strong empirical basis: if science shows something to exist or to be non-existent (which is not the same as not finding it), then we must acknowledge that as a fact. If a hypothesis is tested and found to be true, we must accept it. Likewise, Buddhism must accept the facts — whether found by science or found by contemplative insights. If, when we investigate something, we find that there is reason and proof for it, we must acknowledge that as reality — even if it is in contradiction with a literal scriptural explanation that has held sway for many centuries or with deeply held opinion or view. So one fundamental attitude shared by Buddhism and science is the commitment to keep searching for reality by empirical means and to be willing to discard accepted or long-held positions if our search finds that the truth is different.” (emphasis mine)
The book develops with the Dalai Lama exploring various scientific topics such as quantum physics and neuroscience and how they compare and contrast with Buddhist scripture and cosmology. He even states openly that, in light of current science, some of the cosmological explanations for the origin of life seem quaint at best, and in many cases are better regarded as a point-in-time view of the world rather than an enduring truth. Other areas examined at the intersection of religion and science are human values and ethics.
Overall, a very thought provoking read.
Seeing this article about Google and their recruiting and hiring practices reminds me of an observation that has been backed up by experience over the years. That observation is that intelligence does not lead to effectiveness. In fact, it can lead to some of the more misguided actions and conclusions that I have ever seen.
The seed of this observation came when I was working at a small consulting firm that had an external consultant as its chief technology person. This guy was incredibly bright, but had created a system for merging and reporting on data using flat files that was very nearly unmaintainable. Since the task of maintaining this train wreck was passed onto me when I was hired, I immediately started looking at ways to make the process more automated. It seemed a natural fit for importing the data into a database and driving the transformations and reporting from there. Here is where the fun started: I mentioned my plan to bright-guy and he launched into this red-faced tirade about how it would never scale and how database b-tree algorithms were too inefficient — he knew, because he had written his own b-tree algorithms, etc. This rant went on for at least ten minutes. While he was ranting away (and in between mildly disagreeing with him) I was typing away in r:base (of all things) and knocked together a prototype that basically proved, well, he was demonstrably, dead wrong. He poked and prodded on the prototype for about a half hour, then finally conceded that ‘things had changed with data algorithms in the last few years’. Indeed.
Intelligence is great, but it needs to be pragmatic and relevant. When I discuss this with others, I usually take it to a non-technical domain, just to remove any of the techno-zealotry that surrounds most of software and technology. For the pragmatic developer, if you ask them to make mashed potatoes, they will either know how to do it or consult a cookbook for some reasonable guidance on how to achieve the desired outcome. The bright-guy will likely produce some studies around the required crushing force of a potato, white papers on starch combinations, argue that beans are more appropriate for mashing and possibly even conclude that mashing potatoes is not possible. I’ll take the pragmatic guy and his results any day. He will achieve a quality result in the amount of time the bright guy spends on arrogant tirades and irrelevant research whose aim is to refute your request rather than respond to it.
Another great example is from when I was hired into a financial services company that, up until my arrival, had been utilizing high priced consultants from Microsoft. Nine months before, they had been given a somewhat simple task — to take a set of financial formulas and create a DLL that would allow a suite of financial planning tools to use a common implementation of these formulas. Three guys (1 PhD and 2 MS degreed bright guys) worked on this for nine months. Nine months. What they came back with was the proclamation that it was impossible to implement all of the calculations in a single function call. Not only was this not what they were asked to do, how they could have ever reasonably conceived that implementing calculations in that manner was a good idea was beyond me. When I gently pointed out the problem with their approach, the response was an unapologetic ‘you think you can do a better job, fine…but just look at our stock price!’ No mashed potatoes for that company. The good news was that after a few more months of my mopping up their crap and making much more progress than they had in nearly a year, they were (finally) shown the door. They were even ethically challenged enough to try to recruit me before they left. No thanks.
On another occasion, I was the tech lead at a consumer electronics manufacturer. I successfully migrated them from a load of mini computers to a client server based environment, got them connected to the Internet and built a web site for them back in the days when Mosaic was THE browser and Mozilla (soon to be the Netscape browser) was a rumor being discussed on Usenet. I digress. When I left, the CEO of the company proclaimed that he was going to get a real bright guy ‘to take them to the next level’. So he went to Carnegie-Mellon and hired an MSc grad. What an unmitigated disaster. I kept hearing from people who were still there that this guy would do things like reboot production servers, power off network hubs and other clueless activity. After less than six months, they let this bright guy go (apparently with a lavish severance package to keep him quiet and avoid embarrassing the company and the CEO any further).
I have also had occasion over the years to interview Computer Science PhD grads for various tech positions in business. Inevitably, I’ll ask what their thesis topic was and then ask how it could be applied in business or industry. The number of blank stares that simple question receives is frightening. It is always a bit telling when the answer is that it has no application in business. It is even more telling when I suggest a few applications that make them sit up in their chair and sheepishly admit that they never thought of applying their work in that manner.
Lack of relevance is also the problem with asking cute little problem solving questions in what are supposed to be technical interviews. This seems to be a cliquish thing that really has no bearing on software engineering-type problem solving. I don’t solve word puzzles for fun, because frankly, at the end there is no result (the term ‘mental masturbation’ comes to mind here). However, solving a technical or business problem is enormously satisfying because in the end there is a tangible, relevant achievement.
Something that I have noticed working in IT for the last 20 or so years is that the critical thinking and problem solving skills appear to have diminished with the most recent crop of CS grads. Perhaps this is an unfair characterization; maybe there is less emphasis on ‘the fundamentals’ in the current CS curriculum. As an example, if you give a set of requirements to a recent grad, their first instinct is to go try to download something from the Internet and start customizing it. While I applaud the apparent desire to re-use existing code bases, I’m not sure whether this practice makes them prudent or merely dependent on the work of others for productivity. A similar issue exists when a problem or issue comes up — zoom, straight to Google. If someone else hasn’t solved the problem, well, then it must not be workable — better download someone else’s work and ‘fix’ the problem that way. The other symptom is that inevitably, when asked to do something new, the newbies will insist on having a training class on it. Suggest picking up a book and building a prototype to learn and you will be rewarded with a grimace as if you had asked them to drink their own vomit.
I contrast this with my education and career where I felt that I had a good grounding in fundamentals (how operating systems, programming languages, databases, networks and other building blocks work). Having these in place, I have successfully been able to apply and extend my knowledge base without being dependent on someone else providing me with an answer. I remember one of my first jobs: I showed up and was told, ‘we just bought this minicomputer and we need you to set it up’. Ok, hadn’t done that before, but I love a good challenge. I took the manuals home, came in the next day, set up the mini, did development on it and it was solid for years. This has been repeated over the years as, at various times, I was a database administrator, network engineer, computer operations manager, software developer, development manager and so on. In each of these roles, building on the fundamentals and previous experience seemed natural to me — I wasn’t dependent on someone else solving the problem for me.
There also seems to be a glamor effect at play here as well. Because there is so much access to so much ‘stuff’ via the Internet, new developers feel the need (if not the RIGHT) to only use the tools and practices that are cool at the moment. The whinging that accompanies constraining the technology set can be deafening. In retrospect, somehow I have managed to use the tools at hand to solve the problem at hand without needing to do it with the ‘tool of the moment’ or wasting time on ‘if only…’. Oh well, this presents the opportunity to mentor and lead by example. Maybe an old dog can teach some (not so) new tricks.
This holiday season, I created my own little mashup by combining this web service (created by Ben Hammersley) to track a FedEx package via RSS and the Yahoo Alerts service to notify my mobile phone when the FedEx status was updated.
The combination worked pretty well, though Yahoo Alerts sent several false/duplicate updates. It would be nice if all of the major shipping carriers provided this as a service. Tracking multiple packages via RSS is much simpler than having to go ping individual websites. The option to couple this with SMS notification is a big plus for critical, can’t miss shipments.
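The core of such a mashup is just a polling loop that remembers which feed items it has already seen, which would also go a long way toward suppressing the duplicate alerts I hit. A minimal sketch, using an invented tracking feed in standard RSS 2.0 shape (`<item>` with `<guid>` and `<title>`):

```python
import xml.etree.ElementTree as ET

def new_items(rss_xml, seen):
    """Return titles of feed items not seen on previous polls.
    Deduplicating on <guid> is what keeps repeat alerts from firing."""
    fresh = []
    for item in ET.fromstring(rss_xml).iter("item"):
        guid = item.findtext("guid") or item.findtext("title")
        if guid not in seen:
            seen.add(guid)
            fresh.append(item.findtext("title"))
    return fresh

# Hypothetical package-tracking feed content:
feed = """<rss><channel>
<item><guid>1</guid><title>Picked up: Memphis TN</title></item>
<item><guid>2</guid><title>In transit: Oakland CA</title></item>
</channel></rss>"""

seen = set()
print(new_items(feed, seen))   # both items on the first poll
print(new_items(feed, seen))   # [] -- repeats are suppressed
```

Hang an SMS gateway call off the non-empty results and you have the whole pipeline: carrier status page to feed, feed to phone, with no duplicate pages in between.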