Cavenet Version 2

Plan for improved personal content management network.

* Version 1 overview

Distinct "filesets" (file trees) defined per confidentiality level
("green", "black"), stored discontiguously on the host file system;
links create a virtual unified file system ("cave").
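The link structure could look something like this minimal sketch
(directory names here are placeholders, not the actual layout):

```shell
# Hypothetical layout: each fileset lives in its own discontiguous
# tree, and symlinks stitch them into one virtual "cave".
mkdir -p "$HOME/filesets/green" "$HOME/filesets/black" "$HOME/cave"
ln -s "$HOME/filesets/green" "$HOME/cave/green"
ln -s "$HOME/filesets/black" "$HOME/cave/black"

# Files created inside a fileset are now reachable through the cave:
touch "$HOME/filesets/green/page.txt"
ls "$HOME/cave/green/page.txt"
```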

Master copies housed on SDF cluster. Local workstation copies
synchronized with rsync (iza, lixi; potentially shiro, jii). Updates
to master copies also supported.
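The two-way synchronization could be sketched roughly as below; the
host name and paths are placeholders, not the actual SDF account
layout:

```shell
#!/bin/sh
# Hedged sketch of a per-workstation sync (pull, then push).
MASTER="sdf.org:cave/"   # master copy on the SDF cluster (placeholder)
LOCAL="$HOME/cave/"      # workstation copy (iza, lixi, ...)

# -a preserves metadata; -u skips files that are newer on the
# receiving side, so an older copy never overwrites a newer one.
rsync -au "$MASTER" "$LOCAL"   # pull updates from the master
rsync -au "$LOCAL" "$MASTER"   # push local updates back
```

Note that without --delete (see the shortcomings below), this pair of
commands never propagates removals.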

Public fileset served via Gopher or HTTP (2 paths) from SDF cluster
(master copy).

** Shortcomings

- Deletions must be propagated manually.
- Synchronization can only be initiated manually. (Automatic
  synchronization desirable for infrequently visited publishing
  hosts.)
- Low differentiation of served content. (A greater number of more
  tightly focused sites might be more attractive to readers. However,
  this might be achieved by publishing links to "portal rooms" within
  the cave complex and giving each distinct styling.)
- Some potential nodes don't support rsync.
- No update collision detection.
- Underutilized resources (Metaarray, VPS, Polarhome, ...).
- A lot of published content not in any fileset.
- No comprehensive interface to published content.
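The manual-synchronization shortcoming could be addressed with a cron
entry; this is only a sketch, and the script name, schedule, and log
path are assumptions:

```shell
# Hypothetical crontab entry: run a (yet to be written) sync script
# nightly so infrequently visited publishing hosts stay current.
# m h dom mon dow  command
0 4 * * * $HOME/bin/cave-sync >>$HOME/cave-sync.log 2>&1
```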

Concerning backups, keeping multiple copies of the filesets on
mutually remote hosts is probably adequate without additional copying
to other media. However, I would like to have a system for backing up
resources not included in any fileset.

** Conclusions

Version control is not really a problem: programming the scripts and
batch jobs needed to propagate file deletions is probably more
efficient than migrating to a full version-management system.
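A deletion-propagation job could be as simple as the sketch below: a
dry run previews what rsync's --delete would remove from the master,
and the deletions are applied only on confirmation. MASTER and LOCAL
are placeholder paths, not the real layout.

```shell
#!/bin/sh
# Sketch: propagate local deletions to the master copy, with review.
MASTER="sdf.org:cave/"   # placeholder
LOCAL="$HOME/cave/"      # placeholder

# -n is a dry run: list what --delete would remove, change nothing.
rsync -aun --delete "$LOCAL" "$MASTER"
printf 'Apply the deletions listed above? [y/N] '
read -r answer
[ "$answer" = y ] && rsync -au --delete "$LOCAL" "$MASTER"
```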

What I really need is to clean up and organize published content and
hosts.


* Version 2 brainstorm

- Divide current "green" content into 2 subsites:

  1. Cave of Secret Wizardry :: Programming, computers, and gaming

  2. Plato's Cave :: Literature, politics, philosophy, religion

  (These could remain a part of the cave system by adding "portal"
  rooms and blogs for each subsite.)

-