While not a basic introduction, this article is a valuable chronicle of hands-on lessons learned from using Kubernetes. The hand-drawn illustrations are a great addition.
I am happy to see the release today of Google Notebook. While the chief grumble appears to be that it doesn’t allow for tagging, I am okay with that (it will no doubt be added soon enough).
What Google Notebook does provide is a means to nab portions of web pages and add them to an online ‘notebook’. Notebooks can have folding sections, and the nabbed text can be edited with a basic online editor that lets you modify fonts, colors, hyperlinks, and a few other niceties. Sections can be moved from one notebook to another by simply dragging and dropping, and snippets can be rearranged within a notebook the same way.
Google Notebook reminds me a lot of another application that I use on my Mac called StickyBrain. I use StickyBrain to grab text and graphics from various applications that I can then file away, organize, and search for future reference. I find it very handy for when I want to just grab a snippet of something from a web page rather than bookmarking the entire page. Because StickyBrain is a Mac app, I also found myself wishing that I could do the same on the Windows-based laptop I use for work. Until now, this really wasn’t possible. I could get part of the way there by using LookLater to have visibility into bookmarks between work and home systems (keep in mind, this was before del.icio.us provided private bookmarks).
Google Notebook now lets me just nab the bits of pages that I want and share them StickyBrain-like between home and work. Frankly, I was relieved to find that the provided Firefox plugin works identically under both Windows and Mac OS X.
Now if only Chronosnet would come out with a way to synch my Google Notebooks with StickyBrain, this could be a very powerful solution (they already have an excellent Palm synch conduit and a .Mac synch, so this is not new territory for them).
Currently, Google Notebook doesn’t appear to integrate much with the other Google offerings except for search. To me, the big opportunities here are integration with Gmail and Google Maps. I would guess that other things like tags and the ability to subscribe to shared notebooks online will come in due time.
Overall, I like what I see with Google Notebook and am curious to see how it will evolve over time.
I find many multidisciplinary books to be especially fascinating, and Ambient Findability by Peter Morville was no exception. Morville posits that in the age of the search engine, one of the first considerations for data or a service is how easily it can be found and not necessarily how easy it is to use (though the usability bit certainly comes into play after it has been found).
…design of a useful information system requires a deep understanding of users and their social context. We cannot assume people will want our information, even if we know they need our information. Behind most failed web sites, intranets, and interactive products lie misguided models of users and their information-seeking behavior. Users are complex. Users are social. And so is information.
Of course, there is a great deal of information out there to be found and not all of it is terribly relevant. It should also be obvious that the sheer amount of data ‘out there’ is growing. In particular, the amount of telemetry information from GPS, RFID tags and sensors of all sorts is adding more unstructured or semi-structured data to the mix. Morville covers this ground deftly while touching on a very approachable discussion of the nature of information, how wayfinding in the physical world might provide clues to wayfinding in the data-space, and push versus pull models of data acquisition and filtering.
Chapter 6 introduces semantic web technologies and approaches, and their goal of handling data and metadata in a structured, machine-readable way. These approaches are contrasted with the folksonomy approach of roll-your-own tagging services such as del.icio.us.
It would seem that unless you take charge of your own findability, someone else will do it for you, and not necessarily to your benefit. Typo-squatters were the unfortunate pioneers of this space. A more subtle variation is competitors who create sites to disparage your product and play up their own under the guise of some ‘neutral’ third-party opinion.
What you don’t find can be just as bad. Back in the early 90s, in the context of the then-emergent graphical user interfaces, I identified a malady that mirrors the contemporary ‘Google mindset’. At the time, everyone was extolling the virtues of What-You-See-Is-What-You-Get. The problem that I saw was that the vast majority of people will assume that What-You-See-Is-All-You-Get and not venture beyond the eye candy that is most immediately in front of their noses to discover richer functionality and interactions. There are folks who truly believe that ‘if it is not in Google, then it doesn’t exist’ and that what they find in Google is the truth. This can be extremely dangerous, as it has been demonstrated over and over how easy it is to manipulate search results (not only on Google but also on Digg, Technorati, and others).
Another issue with current search technology is that it simply isn’t fuzzy enough. For example, when I travel, I seek out CD shops to browse the racks and find new music. Part of finding new music is also simply having an ear out for what is playing in the shop itself. I have found quite a bit of new music that I would never have known to search for but was introduced to by a chance listen, or by stumbling across it in a section of the shop where I was looking for an unrelated item. I hold out a great deal of hope that semantic web technologies might enable searching for and discovering items in a manner similar to these physical, chance finds.
While reading through yet another article on SOAP vs REST, I came across a quote from Tim O’Reilly that confirms something that I always suspected about SOAP:
I think there are also some political aspects. Early in the web services discussion, I remember talking with Andrew Layman, one of the SOAP architects at Microsoft. He let slip that it was actually a Microsoft objective to make the standard sufficiently complex that only the tools would read and write this stuff, and not humans. So that was a strategy tax that was imposed by the big companies on some of this technology, where they made it more complicated than it needed to be so they could sell tools. [Emphasis added]
Everything I have seen about SOAP has led me to this conclusion. The funny thing is that many corporations cling to SOAP as if they couldn’t possibly have web services without it (though many crafted and successfully implemented their own simple XML over HTTP services before the SOAP spec saw the light of day).
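To make the contrast concrete, here is a minimal sketch of the difference in payloads. The service, operation, and element names (`getQuote`, `symbol`) are made up for illustration; only the SOAP 1.1 envelope namespace is real:

```python
import xml.etree.ElementTree as ET

# The kind of plain "XML over HTTP" request body many teams rolled
# themselves pre-SOAP: one self-describing, human-readable element.
plain = ET.Element("getQuote")
ET.SubElement(plain, "symbol").text = "AAPL"
plain_body = ET.tostring(plain, encoding="unicode")

# The same call wrapped for SOAP 1.1: a namespaced Envelope and Body
# must surround the payload before you ever reach the actual request.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
body.append(plain)
soap_body = ET.tostring(envelope, encoding="unicode")

print(plain_body)
print(soap_body)
```

And this is before WSDL, tool-generated stubs, and encoding rules enter the picture, which is where the “only tools will read and write this stuff” complexity really accumulates.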
I think that Tim missed the boat with this comment as well:
It’s not necessarily just Machiavellian scheming. I think Microsoft really believes that you can create better user experiences with tools that give people so much more power.
Not quite. It took vendors like Borland (which has since left the compiler/IDE business) and others creating much more robust and productive environments in the 80s for MS to finally wake up to the need for a viable IDE. In typical monopolistic fashion, MS latched on to the IDE as yet another means of vendor lock-in. Once it inserted itself into the IDE business, MS’s ‘strategy’ has always been to create the most obscure, convoluted means of implementing code, libraries, frameworks, etc., to tie developers to its toolset, plain and simple. If the languages and frameworks were able to stand on their own, there would be no lock-in to the MS tools. Incidentally, you will hear similar arguments around JavaServer Faces and Sun tools as well.
Tim Bray has a brief posting with what is likely to turn out to be an incendiary claim: if you come from a Java background, you will write better libraries (and code) in other languages than if you didn’t have a Java background. I can see this particularly outraging much of the Ruby crowd, who seemingly live to smugly disparage anything that is not Ruby.
Oddly enough, this version was released as beta yesterday, but it’s good to go today.
I did find one significant bug — if you don’t have any photos selected and you click on File->Export, iPhoto will hang (apparently due to flickrExport hanging). If you let it sit for a minute or two, it will select your entire photo library to export — probably not what you intended. I am confident that this will be quickly and proficiently fixed. I stand corrected — this is a ‘feature’ of iPhoto: if you have no photos selected when you Export, it assumes that you want all of them exported.