"On the other hand, applications like airline reservations and buying concert tickets will probably always need structured data, since they rely upon the ability to accurately manage shared access to a single data item instance (i.e. the airplane or concert hall seat) so that it is sold only once and on a first come first served basis."
"Applications like purchase order approval, medical records distribution, expense report filing, engineering and repair drawing distribution, safety inspections, and so on, really do not derive any benefit from the significant effort required to structure their data."
The information in the latter list of apps is primarily consumed by end users. Even if the data in these apps has some structure, it is primarily there to aid local manipulation (such as totaling or sorting), not something that requires a full-fledged DB.
And though no one does it today, I believe wikis can, in theory, become front ends for apps such as these, much like Tcl can be used as a scripting front end for applications.
One way is to let the browser track changes. You create a topic listing the topics you are interested in. The URLs generated for these topics should change whenever a topic changes, so the browser will treat them as 'yet-to-visit' URLs. For this, wiki product authors would need to change the URL-generation algorithm to include the topic's latest version in the URL.
But there is a problem: the URLs don't look clean (the version information in them is otherwise useless). In fact, if such a URL is stored elsewhere, you might end up seeing an older version of the topic, not the latest.
So a better approach is to simply create a macro that generates the URL with the version, or some syntactic notation such as Topic:latest (with 'latest' replaced by the current version number).
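A minimal sketch of what such a macro might do, in Python. The URL scheme, topic names, and revision lookup here are my own assumptions for illustration, not taken from any particular wiki engine:

```python
# Hypothetical sketch: expand "Topic:latest" into a version-stamped URL,
# so that a changed topic yields a URL the browser has not visited yet.

import re

def get_latest_version(topic):
    # Assumption: the wiki keeps a revision counter per topic;
    # a real engine would query its revision store here.
    revisions = {"ProjectPlan": 7, "TravelChecklist": 3}
    return revisions.get(topic, 1)

def expand_versioned_links(text, base_url="http://wiki.example.com"):
    """Replace occurrences of 'Topic:latest' with a URL that embeds
    the topic's current version number."""
    def replace(match):
        topic = match.group(1)
        return f"{base_url}/{topic}?v={get_latest_version(topic)}"
    return re.sub(r"\b(\w+):latest\b", replace, text)

print(expand_versioned_links("See ProjectPlan:latest for details."))
# -> See http://wiki.example.com/ProjectPlan?v=7 for details.
```

The version lives only in the query string, so the topic's canonical page stays clean while the browser's visited-link coloring still signals changes.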
Any other suggestions?
At Persistent, we have developed a SharePoint web part to create and maintain a project space. This tool helps pull emails from Exchange or IMAP, extract documents, add notes to these documents, etc. If anyone is interested, we would be happy to provide the beta versions.
Within a wiki environment, you edit the topic in the browser's text box to create a knowledge-base topic. You can also upload documents (one by one) and add references to them, cut and paste from emails, and reference other topics and URLs.
The good thing is that you can indeed publish quite a good web page with lists and tables. The bad thing is that it is not efficient. The ideal mechanism is drag-and-drop into publishable areas. As of today, we can drag and drop URLs, and perhaps use the new WebDAV-enabling plug-ins to manage documents through Explorer. But we need to be able to do this for multiple data sources, and the "drop" part should go directly into containers such as tables and lists.
This being a key differentiating and useful feature, SharePoint has definitely scored a point here.
Back to 1995's Yahoo vs. AltaVista: AltaVista won't work because:
* Users don't know how to formulate queries
* Users don't want to see irrelevant hits
* Search won't scale
* Won't make money :-)
and
Subconsciously, some small part of the brain says I can trust the search.
Interestingly, one of the key decision points for any client-side email product is whether it should extend Outlook (i.e., be a plug-in) or be an independent product (in which case, it had better look like Outlook), as Bloomba is. The downside of the latter is that you are forced to integrate a calendar and everything else that Outlook provides: in essence, replicating most of the functionality of an email client. (That is actually a good thing for the industry and for innovation.)
And Bloomba seems to have struck the right chord: search being so important (and unoptimized in most versions of Outlook), it will be perceived as a completely independent capability, possibly helped by a mental mapping to Google. So Bloomba is in a much better position to succeed.
And perhaps those who want to mine email data should partner with Bloomba! You would hopefully not have to struggle with all those email file formats, and could depend on the Bloomba ecosystem for that instead.
Process is an embedded reaction to prior stupidity.
and
A wiki in the hands of a healthy community works. A wiki in the hands of an indifferent community fails...
And an example he gives: some graffiti entries on the site http://wikitravel.org/ were removed in less than two hours.
Ben Hyde has some interesting counterpoints on why processes are required:
The challenge in making a community that functions well is creating something out of those talents that is closer to the maximum over the diverse talents rather than the maximum of their lack of skills.
Processes are required to create a "minimum standard" for a particular task, especially when a person is new to the task. These are made available through checklists, templates, references collected in previous similar tasks, and so on. In my experience, if the task we are performing has "wiki-enabled" people, there are a lot of optimizations: we already work against checklists, quickly gather relevant details in one place, and in general meticulously manage the information throughout the task.
But with other people involved, the fallback is usually email-based communication (with a lot of meetings thrown in). While we can't really compare the quality of the results, the overall experience is a bit more taxing and takes more time. There is never a sense of having "taken care of it all"; there is always a nagging "Am I missing something?". I have sometimes forgotten simple tasks such as spell-checking after "queuing" it to be done before submission.
I often used to forget my mobile charger before an important trip. But now a 'travel checklist' that sits in my home wiki takes care of such things!
The ideal case is of course to mark posts with one or more labels, and let users subscribe to one or more labels.
This is especially true of sites such as Blogger or AlwaysOn, which have many members and publish multiple blogs.
The algorithm for selecting blogs now becomes a bit more complicated: rather than just pulling in the URL of a feed, you would use a subscription wizard to browse through a list of blogs and subscribe to the ones matching the labels you are interested in.
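A minimal sketch of such label-based selection, in Python. The feed list, URLs, and label vocabulary are made up for illustration; a real wizard would pull this catalog from the hosting site:

```python
# Hypothetical sketch: select feeds whose labels overlap the user's
# interests, instead of subscribing to raw feed URLs one by one.

# Assumption: the hosting site exposes each blog's feed URL along
# with the labels its posts carry.
FEEDS = [
    {"url": "http://example.com/blogs/alice/rss", "labels": {"wiki", "collaboration"}},
    {"url": "http://example.com/blogs/bob/rss",   "labels": {"search", "email"}},
    {"url": "http://example.com/blogs/carol/rss", "labels": {"wiki", "search"}},
]

def select_feeds(interests):
    """Return the feed URLs matching at least one label of interest."""
    wanted = set(interests)
    return [feed["url"] for feed in FEEDS if feed["labels"] & wanted]

print(select_feeds(["wiki"]))
# -> ['http://example.com/blogs/alice/rss', 'http://example.com/blogs/carol/rss']
```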
But is there a more practical use for this? With so much information around, labeling and label-based selection are going to be key issues. After all, labeling is not necessarily efficient when performed at the time of creating or accessing content; it is done when content is found useful, typically when you search for information in a particular context. And such labeling is performed assuming the information will be useful later, which it may not be.
But collaborative labeling is indeed useful: if I label some information, it could be useful to everyone else. But what should the infrastructure for it be? For example, Gmail allows labels to be put on emails. But: (1) I need to create my own list of labels; I can't borrow one from some generic list available off the web and customize it myself. (2) I can't send this labeling information along with an email I create, so it is duplicate work for the recipient. (3) If the sender adds new labels to a thread after the emails were received, there is no way to share them.
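As a thought experiment, the labels could travel with the message itself, say in a custom header; a minimal Python sketch, where the "X-Labels" header name is my own invention (nothing in SMTP or MIME standardizes label exchange):

```python
# Hypothetical sketch: carry labels inside the message via a custom
# header, so the recipient's client can reuse them instead of
# re-labeling from scratch.

from email.message import EmailMessage

def compose_labeled_email(sender, recipient, subject, body, labels):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    # "X-Labels" is a made-up header for illustration only.
    msg["X-Labels"] = ", ".join(labels)
    msg.set_content(body)
    return msg

def read_labels(msg):
    """Recover the sender's labels on the receiving side."""
    raw = msg.get("X-Labels", "")
    return [label.strip() for label in raw.split(",") if label.strip()]

msg = compose_labeled_email("me@example.com", "you@example.com",
                            "Q3 plan", "Draft attached.",
                            ["project-x", "planning"])
print(read_labels(msg))  # -> ['project-x', 'planning']
```

This would address point (2) above; points (1) and (3) would still need a shared label vocabulary and some way to propagate late additions.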
Of course, automatic categorization and better search techniques might remove the need for labeling in the first place.
Perhaps more debate is needed along these lines.