Monday, February 28, 2005

resumed work on web (finally!), class

This morning I reviewed the web interpreter (webi.py), did a little refactoring and some testing. I don't think it will take too long to bring it up to the same level as the wiki code. After that, it needs to support dataset navigation.

Meanwhile, I'm starting class again--only two days this time. The class is in project management. (Sounds like fun!) So CompStrm is again put on the back burner, if only for half a week. Well, at least it was a productive weekend--so I've got no real complaints.

Sunday, February 27, 2005

a day refactoring wiki, thoughts on references

Had a good solid productive day debugging and refactoring the wiki logic.

I've also been thinking about (symbolic) references. As implemented, they will not work when page names are extended with dataset qualifiers (e.g. {MainPage/Main}). But that's going to take some real work, as none of the tkcs data structures is a good fit. ;-( The complication is that I need a sorted key/value list that changes over time. So I'll need to go back to the tks level and see how to implement it there, as I did with dictionaries.
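To pin down what I actually need, here's a rough Python sketch of that structure--plain Python only, not tks/tkcs code, and the class and method names are mine:

```python
class TimedSortedMap:
    """A sorted key/value list whose contents can be queried as of any time.
    Each key keeps its full change history; a value of None marks a delete."""

    def __init__(self):
        self.history = {}  # key -> list of (time, value-or-None), in time order

    def set(self, key, value, time):
        self.history.setdefault(key, []).append((time, value))

    def delete(self, key, time):
        self.history.setdefault(key, []).append((time, None))

    def items_at(self, time):
        """Return the sorted (key, value) pairs as they stood at `time`."""
        result = []
        for key in sorted(self.history):
            # take the last change at or before `time`
            changes = [v for t, v in self.history[key] if t <= time]
            if changes and changes[-1] is not None:
                result.append((key, changes[-1]))
        return result
```

The point is that a plain sorted list isn't enough: every entry needs its history preserved, so that navigation through time still sees the old keys.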

Meanwhile, I need to bring the web code up to date. (I've said this before, I know.)

Back to Alpha

As posted on the CompStrm Forum:

I've changed the CompStrm project from Beta back to Alpha.

The primary reason for this is that the data migration technique I've been using has not preserved historical data. And that is a major failure in a system that supports navigation through time.

Sharing also needs considerable work. The biggest problem with sharing is that DMP file processing is strictly change based. Rather, it needs to work at a higher level, recognizing activities like "delete file" so that properties and references local to the receiving system can be deleted.

Now that the project has reverted to Alpha, data migration to new releases will no longer be supported. I expect CompStrm will remain in Alpha for several months. My hope and expectation is that the CompStrm project will, when it again reaches Beta, be much more mature.

The failure here--and I do view this as a failure--stems from the uniqueness of time-based systems and a universal lack of experience in dealing with them. In retrospect, the simple-minded approach previously taken with data migration is obviously inadequate. It is to minimize the effect of this failure that I am returning this project to Alpha.

Bill la Forge
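A note on the DMP point in that post: by "working at a higher level" I mean something like the following sketch, where a delete activity also cleans up state local to the receiving system. The record format and names here are made up for illustration--they're not the actual DMP format.

```python
def process_dmp_changes(changes, local_refs):
    """Fold raw change records into higher-level activities.

    A "destroy" record is treated as a delete-file activity, so any
    properties/references local to the receiving system are removed too,
    instead of just replaying the raw change.
    """
    for rec in changes:
        if rec["op"] == "destroy":
            # delete the file AND the receiving system's local references to it
            local_refs.pop(rec["file"], None)
        elif rec["op"] == "set":
            local_refs.setdefault(rec["file"], {})[rec["prop"]] = rec["value"]
    return local_refs
```

With strictly change-based processing, the receiving system would never know it should drop its own local properties when the sender destroys a file.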

Saturday, February 26, 2005

wiki working, strange conditions

Got the Wiki working, but I'm not entirely satisfied. The problem is rooted in the kludge that limits pages to those in the Main dataset.

The problem is that a wiki page is first created, then added to a dataset. So at the time of creation, a wiki page is not in the Main dataset. Ouch! So I've kludged things up to work. I expect to pull out all the kludges later.

Hmm. Part of the problem is that I'm tracking things by well-known name, which means lots of duplicates across multiple datasets. But if I'm viewing a page and want to see its journal, I've got a better way to identify the page: its file name.

The advantage here is that when accessing by file name, you're not qualifying by dataset. So the kludge effect is limited.

Something to try anyway. This is a learning experience, as it is the first time I've tried to do an intersection search over time. I thought this was going to be easy!

Friday, February 25, 2005

Back to the Wiki, references

Well, I thought I was ready to start in on the web code. Indeed, I did. But then I realized that the Wiki code was still in poor shape. So I spent some time cleaning it up. Of course, now it doesn't work. ;-/

References are starting to get complicated. If I reference X and X is in a different dataset, is that a reference? For the moment the answer will be NO. But I expect the answer will get a bit more complex over time.

Now, what about qualified references? How do we reference a wiki page in a particular dataset? Of course this isn't going to come up for a while--the Wiki is currently hard-coded to the Data/Main dataset.

Thursday, February 24, 2005

only latest data preserved

Well, I've got the Wiki working again. But there is a major restriction: historical data cannot be accessed.

The problem is that only current metadata is used. Data stored in properties that no longer exist cannot be accessed. And only the "latest" data got converted.

Definitely puts a crimp in things.

The lesson here is that to preserve historical data, you need to convert the .dmp file, rather than operating on just the current state of the database.
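In other words, the migration should transform the change stream itself rather than the database snapshot, something like this sketch (the record structure and the property names are illustrative, not the real DMP format):

```python
def convert_dmp(records, convert_record):
    """Migrate by converting every record in the .dmp change stream.

    Because each historical record is transformed--not just the current
    state of the database--the full history survives the migration.
    """
    return [convert_record(rec) for rec in records]


def rename_prop(rec):
    """Example converter: rename an obsolete property in every record.
    Returns a new dict, leaving the original record untouched."""
    if rec.get("prop") == "WikiTopic":
        rec = dict(rec, prop="WellKnownName")
    return rec
```

Operating on only the current state, as I did, silently drops every earlier version of the data.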

citations now working

I had forgotten to convert the reference property:

map WikiRef addWellKnownRef
destroyWikiRef

With that done, there are still some bugs left. But I'm short of time this morning, so it's gotta wait till tomorrow.

Wednesday, February 23, 2005

converting old wiki data to well knowns

I've got the code roughed out for moving the old wiki data forward. It seems to work OK but, now that I've converted my old test data, I'm finding some bugs in the updated Wiki code. ;-/

Here's the script for updating a wiki page:

rename %random%
set name . WikiTopic
assignWellKnownName %name%
destroyWikiTopic
publishWikiDoc wiki/tmp.txt
loadWellKnownDoc wiki/tmp.txt
destroyWikiDoc
set page %path%
cd %mainds%
addDataSetItem %page%

Tuesday, February 22, 2005

something nice at the office

Got a new desktop at the office this week--a Blade 2500 with 2 GB of RAM.

Today I installed Solaris 10 on it. (It still comes with Solaris 8 and 9.)

Solaris at home

When we came to Bangalore, we traveled light. I brought only my laptop and one PC, leaving my other two PCs and assorted gear in Raipur.

I've now managed to install Solaris 10 (the x86 version) on that PC. Eventually, I'd like to use this system to test CompStrm. (It's a background project.) But other things will come first--I still need to install a firewall and anti-virus software, and add some VPN software for accessing Sun's internal network. Then I've got to download and compile Python and Twisted.

I figure this is a good learning exercise for me, as I'm quite weak on Unix--it's just been too many years.

the next big step?

After sharing has been enhanced (and it needs a lot of work) and we have a full web interface for applications, I'm thinking that the next big step might be to implement the distributed trust model of QARE. This would in turn facilitate some interesting distributed processing.

back to it

Tested the Wiki a bit this morning. Found/fixed a bug.

Next is data migration and then the web logic.

Sunday, February 20, 2005

J2EE vs CompStrm

Having spent a week in J2EE class, it is easy to be seduced by the architecture. It is well structured and highly scalable. And very rich, addressing a range of transactional needs while also providing reasonable security measures. Its scope is commercial enterprises, and it is a good fit.

The heart of J2EE is a directory server (LDAP), running both JNDI and the Identity server. The JNDI server is used to locate and configure EJBeans, while the Identity server handles users, preferences and role assignments. It is very much a centralized architecture.

CompStrm is driving towards an inverted architecture, where each user has control of his/her own system, but can share structures or streaming structures with other users.

An earlier version of CompStrm already did this, but without the added dimension of time, and only for passive structures (like Wiki pages). This time around, I am adding (application) logic to the sharable structures.

But sharing should be done safely. And there are better security models for sharing data and logic separately, with the logic deciding which data becomes which type of object rather than having that information coded in the data.

One thing J2EE does not give much attention to is movement of data/logic across trust domains. Rather, you have one large trust domain and centralized control over deployment.

In contrast, a CompStrm network would be a number of small trust domains, which provide much more flexibility and should be easier to secure, provided the architecture properly addresses those cross-trust-domain issues.

512kbps DSL in Bangalore ;-)

Yesterday (Saturday) Airtel gave me a 512kbps DSL connection. Very nice. And paid for by my employer, Sun Microsystems. The maximum transfer is 4GB/month, after which I must pay Rs1.4 per MB. (About $33/GB.)

So we bought some speakers and a hub (the Airtel-provided router only had one port), a power strip, and a keyboard and mouse (mine were left in Raipur, along with a nice Ethernet switch). Eventually I plan to be running Solaris on my desktop.

Toys are always fun, as long as they work. And a little internet radio is helpful when living in Bangalore. (Of course, there's satellite radio here too.)

Rs  175  Keyboard
    100  Mouse
    550  900-watt speakers with woofer
    100  Power strip with surge suppressor
   1050  8-port hub
-------
Rs 1975  (About US$47)

Saturday, February 19, 2005

A rough week

It's been 3 years since I did any Java, and I've never used Sun's Studio product or J2EE. So this 5-day class (Tuesday-Saturday) in J2EE has been an uphill climb--especially the labs. (The theory has all been fun.)

As today (Saturday) is my last class, it's pretty much taken care of. I might get back to CompStrm tomorrow, but it's more likely that I'll take the day off.

Tuesday, February 15, 2005

in class all week

I'm in class all day every day this week, including Saturday. I'm just out of bandwidth--CompStrm is on hold for the moment. ;-(

But it's an interesting class--J2EE. Haven't looked at J2EE since the initial white paper. It's come a long way since then. ;-)

Sunday, February 13, 2005

Wiki seems to be working!

I am delighted to see the Wiki working again. I did cheat--it's hard-coded to work with the Main DataSet. But methinks the next step is to write a conversion script so that I can access all my old test data.

(I'll further note that I have not yet worked on the web access code. So there's still plenty to do.)

Friday, February 11, 2005

integrating sample apps with wiki pages

Are the relation, person, telephoneNumber and todoList files completely distinct from wiki page files? If they were extensions of a wiki page, unified access might be easier.

Is there any problem in attaching a document (and references) to all of these file types?

We could then say that we have unified everything as a Wiki page, but some types of pages have extra attributes that we can operate on.

Going a step further, what about making a DataSet file a Wiki page too? Now it sounds like we can attach a document to any WellKnown file.

Seems reasonable.

Now, should references be based just on the document content, or should they include any reference to a WellKnown? (That is to say, need we update references in the relations1 application?) I'm thinking that the references property defined on a wiki page should ONLY be an index of the document. Keeps things simpler.
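If the references property is purely an index of the document, deriving it could be as simple as this sketch--treating the {PageName} markup used above as a reference. The regex and function name here are my own, not existing CompStrm code:

```python
import re


def document_references(text):
    """Return the sorted, de-duplicated set of {PageName} references
    found in a wiki document's content. Nothing outside the document
    (e.g. application-level references) is considered."""
    return sorted(set(re.findall(r"\{([^{}]+)\}", text)))
```

Keeping the index content-only means the relations1 application's references never need touching when wiki pages change.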

Thursday, February 10, 2005

Python 2.3.5 final

Time to upgrade to Python 2.3.5.

I'll note that Twisted is still not providing downloads for Python 2.4.

Wednesday, February 09, 2005

a small start on the wiki

I've updated the wiki install and test data scripts. ;-)

Of course this means the wiki and web servers are now broken. ;-(

Once they are fixed up, I'll need to write a conversion script for migrating old wiki data forward.

And yes, I am looking forward to finishing all this.

Navigating DataSets Tutorial

Navigating DataSets


WellKnown files are first-class user objects. WellKnown files have a name,
which need not be unique, and which is assigned to the !WellKnownName property.


Virtually every application will define one or more types of WellKnown files. To
facilitate sharing between systems, the file name of a WellKnown file is usually
a system generated random string assured of being unique.
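Such a name can be generated along these lines (a Python sketch; the exact alphabet and length are assumptions based on the example file names in the listing below):

```python
import base64
import os


def random_file_name(nbytes=18):
    """Generate a random, effectively unique file name.

    18 random bytes base64-encode to 24 characters with no padding;
    '/' is replaced with '-' to keep the name filesystem-safe."""
    return base64.b64encode(os.urandom(nbytes)).decode("ascii").replace("/", "-")
```

Because the name is random rather than user-chosen, two systems can create WellKnown files independently and later share them without name collisions.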

DataSet files are used to hold a collection of WellKnown files using the !DataSetItem property. The WellKnown files in a DataSet may be of more than one type, and may even be of types defined by more than one application. A WellKnown file may also be held by more than one DataSet. DataSet files can also be WellKnown, allowing the user to assemble DataSets into trees (or bushes). However, circular structures are not allowed. The root DataSet file is "Data".
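Detecting a would-be circular structure before adding one DataSet to another can be sketched like this (the dict-of-lists structure is illustrative, not the actual !DataSetItem property):

```python
def would_create_cycle(datasets, parent, child):
    """Return True if adding `child` to `parent` would make the DataSet
    graph circular. `datasets` maps each DataSet name to the names of the
    items it holds; a cycle exists if `parent` is reachable from `child`."""
    stack = [child]
    while stack:
        ds = stack.pop()
        if ds == parent:
            return True
        stack.extend(datasets.get(ds, []))
    return False
```

Since a WellKnown file may be held by several DataSets, the structure is a directed acyclic graph (a "bush") rather than a strict tree, which is why a reachability check is needed rather than a simple parent comparison.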

WellKnownDirectory files are directories which hold a single type of WellKnown file. The sh operation is defined on WellKnownDirectory files to list the directory content with the WellKnown names. (The dir command is particularly useless here, as the file names are difficult for a user to work with.)

The sh operation defined on DataSets also lists their content with WellKnown names. Conversely, the shds operation defined on WellKnown files lists the WellKnown names of the DataSets holding the WellKnown file.

WellKnownDirectory files are found in the /WellKnownDirectories directory. DataSets are found in /WellKnownDirectories/DataSets. And the "Data" DataSet can be found at /WellKnownDirectories/DataSets/Data. (This DataSet does not have a generated file name--both its file name and WellKnown name are "Data".)


tki> dir
1. !Dictionaries, A Directory
2. Applications, A Directory
3. Capabilities, A Directory
4. Descriptors, A Directory
5. Dictionaries, A Directory
6. Loaded, A Directory
7. Maps, A Directory
8. Operations, A Directory
9. Properties, A Directory
10. Text, A Directory
11. Transactions, A Directory
12. Users, A Directory
13. WellKnownDirectories, A Directory

tki> cd #13

tki> dir
1. DataSets, A directory containing WellKnown files.
2. Phonebook1Numbers, A directory containing WellKnown files.
3. Phonebook1People, A directory containing WellKnown files.
4. Relations1, A directory containing WellKnown files.
5. ToDo1, A directory containing WellKnown files.

tki> cd #1

tki> dir
1. Data, A DataSet file.
2. ILt0h4cNLqabB+cUEXtj415L, A DataSet file.
3. avT4K-tvnXx3F5n4Q-EzKF+4, A DataSet file.
4. m7WCILr6IXS34nBgZRHKUOSb, A DataSet file.
5. vgzxhd+jgrJT6QU2PDFmJht6, A DataSet file.

tki> sh
1. People
2. Phones
3. Phonebook
4. Relations
5. Data

tki> cd #1

tki> sh
1. Zikes, Sam
2. Smith, Joan
3. Smith, John

tki> shds
1. Phonebook

tki> cd #1

tki> sh
1. People
2. Phones

tki> shds
1. Data

tki>

Tuesday, February 08, 2005

started tutorial for Navigating DataSets

I got a good start today on a new tutorial, Navigating DataSets. Should be wrapped up in one more session.

After that its time to look again at the Wiki.

Monday, February 07, 2005

tutorial updates completed

Finally finished the updates to the tutorials.

I'm thinking that I should add a new tutorial, Navigating DataSets. Then the tutorials would truly be up to date.

Sunday, February 06, 2005

navigatingThroughTime, finally

Finally finished the time tutorial update. Had to add some dataset navigation in the process.

Things have been going slowly. Problems with my neck, mostly. Working on a laptop aggravates it.

Hopefully the rest of the tutorials will go more quickly--I'm looking forward to working on the wiki again.

Thursday, February 03, 2005

tutorials well begun, but not half done

I updated the tutorials for navigating by type and navigating by group today. Lots of changes! ;-(

At this stage, I am only updating old code. At some point, I will also need to write a tutorial on navigating by dataset.

Wednesday, February 02, 2005

Phonebook1 working, English classes

Finally got Phonebook1 working. Next step is to review all tutorials and update any references to obsolete code.

On a personal note, my wife Rupali begins English classes today. Three months, 6 days a week, 2 hours a day for Rs 1800 (about US$43).