By no means do I disparage Eric's work so far - please nobody take it that way! I just (kinda sorta strongly) feel that there's more to it, and PM could be that more.
Maybe, but adding secure 'storage' to PM's list of features will in no way solve Tyr's particular problem either.
Unless syncing (to FTP/online PM) worked now, or he had PM on his PDA (with the correct settings - that probably means syncing too; otherwise it means manually entering the same settings in both places, which is error-prone).
He's already said he'll be encrypting the "pre-defined password" field
Haven't heard of this feature - what's that?
Not sure where it is around here - somewhere there was a discussion of keeping existing passwords (not generated by PM) in the "prefix" field. (actually, that's been in a few places... :)
Eric brought up that the prefix field is stored in clear, and that he'll (or he could?) change that so it's encrypted. If I remember correctly, he said something about creating a separate field for this purpose, and encrypting that too.
I *think* it was in the same conversation that he said he could encrypt the Description field, so when it's multi-line (and I'm using it for PINs, Social Security numbers, frequent-flyer numbers, etc) it'll be encrypted.
I'll still argue that multiple replication wouldn't be that difficult, given the small amounts of data we're dealing with (as opposed to, say, a credit-card database, or source-code files for a large project). But I don't think it's important enough to be a requirement, either.
It's not the amount of data that is at issue - it is the functionality itself. You can get this kind of functionality with an Enterprise Oracle Database - but at what cost?
PostgreSQL has a replication server, but I have no idea of the extent of its capabilities.
Anyway, we're a ways from this, so there's plenty of time for debate.
Oh, my! Are you thinking the *servers* would sync amongst themselves? I suppose that's possible, but I wasn't thinking that - I was thinking I'd sync my *notebook* against server A, then server B, then server C...
I can see the usefulness of server-to-server automagic syncing, but I can also see cases where your server can't *get* to mine (maybe my ISP won't allow external servers access to its PM backup server?)
The reason I brought up data size is that it changes the equation - for example, if we've only got three passwords (account records in PM) then it's trivial to sync amongst however many machines you've got - you get something like this:
server A: my record 1 is at change number 5.
server B: mine's at 6, here's the new one.
server A: my record 2 is at change number 3.
server B: so's mine.
server A: my record 3 is at change number 17.
server B: mine's at 16, send record 3.
Actually, that could be the same conversation between notebook and server; doesn't matter.
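That back-and-forth can be sketched in code. This is just my own illustration of the idea, not anything from PM: I'm assuming each store keeps a per-record change number, and the record layout (a dict of record id to (change number, data)) is invented for the example.

```python
# Hypothetical sketch: pairwise sync driven by per-record change numbers.
# A "store" here is just dict: record_id -> (change_number, data).
# This layout is my own illustration, not PM's actual format.

def sync_pair(a, b):
    """Bring two stores into agreement record by record."""
    for rec_id in set(a) | set(b):
        ca = a.get(rec_id, (0, None))[0]   # missing record = change number 0
        cb = b.get(rec_id, (0, None))[0]
        if ca > cb:
            b[rec_id] = a[rec_id]   # "mine's newer, here's the new one"
        elif cb > ca:
            a[rec_id] = b[rec_id]   # the other side wins
        # equal change numbers: "so's mine" - nothing to do
```

After one call, both sides hold whichever version of each record had the higher change number - exactly the conversation above, whether it's server-to-server or notebook-to-server.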
Then simply repeat the entire sequence for the next server:
server A: my record 1 is at change number 6.
server C: mine's at 5, send record 1.
server A: my record 2 is at change number 3.
server C: so's mine.
server A: my record 3 is at change number 17.
server C: mine's at 23, here's record 3.
Then server A syncs the same way with servers D, E, F, and G.
Server A now needs to call B up and update record 3 (or can simply remember that *something's* old on B and do a full sync with B).
server A: my record 1 is at change number 6.
server B: so's mine.
server A: my record 2 is at change number 3.
server B: so's mine.
server A: my record 3 is at change number 23.
server B: mine's at 17, send record 3.
If F (for example) also had newer data than A during the first pass, then A would re-sync with B, C, D, and E. (G's already OK 'cause it was sync'd after F's changes.)
Now, A, B, C, D, E, F, and G are all in sync.
If (for example) B changed A again during the second pass, then A would re-sync with C..G.
It's best if A syncs with all servers first, and then goes back to re-sync; that way after the first sync, A has all the up-to-date info, so unless there are new changes then it's only two passes.
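The whole multi-server dance boils down to "sync against every peer, and repeat the round until a full pass makes no changes." Here's a self-contained sketch of that, again with an invented store layout (record id -> (change number, data)) and my own convention that the pairwise sync reports whether anything moved:

```python
# Hypothetical sketch of the multi-server passes described above.
# sync_pair returns True if either side was updated (my own convention),
# so sync_all can tell when a round was a no-op and stop.

def sync_pair(a, b):
    changed = False
    for rec_id in set(a) | set(b):
        ca = a.get(rec_id, (0, None))[0]
        cb = b.get(rec_id, (0, None))[0]
        if ca > cb:
            b[rec_id] = a[rec_id]
            changed = True
        elif cb > ca:
            a[rec_id] = b[rec_id]
            changed = True
    return changed

def sync_all(a, peers):
    """Sync store a against every peer; repeat until a full round
    changes nothing. After round one, a holds all the newest records,
    so with stable data this settles in at most one extra round."""
    while any([sync_pair(a, peer) for peer in peers]):  # list, not
        pass  # generator, so every peer is visited each round
```

With stable data this terminates quickly: the first round pulls everything newer into A, the second pushes the winners back out, and the final round confirms nothing changed.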
Expand it to 1,000 records - doesn't matter; same thing works - but only because the data is relatively stable; it's not constantly changing under your feet. (nobody's going to be syncing their notebook and changing the desktop data and syncing that and then running to the notebook and changing it and syncing that again - over and over again)
It's quite different from a huge active database, where there'll be many transactions at the same or other servers during the time a single sync takes place. For those, you certainly need Oracle-level horsepower. For this, it's not needed at all - it's highly unlikely there'll be more than one machine syncing at a time (meaning me syncing my desktop to server A and my notebook to server B at the same time, each with different changes), so the typical case is one pass, occasionally two. More than two passes would be incredibly rare (and probably only when intentionally coordinated).
I don't see *any* need for the database manager to handle syncing real-time data with other servers! (hot backups of the database itself are another story - but that's not accessible to the user-sync, so that's not a problem either)
But, as you say (and I agree) - there's plenty of time before this!
(ain't it fun to argue, though? :)
(the term I use for this is "arguetect" - we argue back-and-forth until the design is architected... : )
- Al -