Firstly, thanks for the fantastic tool and support for what is an extremely interesting idea (and implementation).
As an AM newbie, I'd be interested to see a quick-reference guide for those coming from a traditional data modelling background, showing the appropriate modelling for common structures and situations. For example, a simple attribute of an entity is easy, and an attribute that is a lookup is also easy, but it's a little harder to see the best way to model a many-to-many relationship resolved by a linking table that has attributes of its own.
I'm not sure what the cardinalities should be for this situation (maybe I missed it in the tutorials).
My second request is to know whether anyone has implemented a large transactional database system using this approach. It's attractive for a project I have, for many theoretical reasons, but I'm concerned that this project (3000 users, though not hugely frequent transactions) may suffer performance-wise under this treatment. What do you think?
Yes, there are examples of large transactional database systems using this approach, but unfortunately I am bound by NDAs that do not allow me to provide any specifics.
The reason performance does not suffer (much) is that transactional queries tend to be very "narrow". For example, if you want to retrieve the information for a single entity, say a customer, then once the surrogate identity of that customer has been established, the clustered index on each table can be used to retrieve its attributes. You will of course have more index lookups than with a single table, and also more disk spindle movement if you are not on SSDs or have not intelligently partitioned your data. So some performance will be lost compared to having everything in a single table, but not much.
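To make the "narrow query" point concrete, here is a minimal sketch using Python's built-in sqlite3 module. The schema and all table/column names are invented for illustration (a 6NF-style layout with one anchor table plus one table per attribute); real deployments would typically sit on a full RDBMS with explicit clustered indexes, but the access pattern is the same: one surrogate key, one primary-key seek per attribute table.

```python
import sqlite3

# Hypothetical 6NF-style layout: an anchor table plus one table per
# attribute, all keyed on the customer's surrogate identity.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CU_Customer (CU_ID INTEGER PRIMARY KEY);
CREATE TABLE CU_NAM_Customer_Name (
    CU_ID INTEGER PRIMARY KEY REFERENCES CU_Customer,
    CU_NAM_Customer_Name TEXT NOT NULL
);
CREATE TABLE CU_EML_Customer_Email (
    CU_ID INTEGER PRIMARY KEY REFERENCES CU_Customer,
    CU_EML_Customer_Email TEXT NOT NULL
);
INSERT INTO CU_Customer VALUES (42);
INSERT INTO CU_NAM_Customer_Name VALUES (42, 'Ada Lovelace');
INSERT INTO CU_EML_Customer_Email VALUES (42, 'ada@example.com');
""")

# A "narrow" transactional query: once the surrogate identity (42) is
# known, each attribute costs one primary-key lookup -- more lookups
# than a single wide table, but each one is an efficient index seek.
row = con.execute("""
    SELECT nam.CU_NAM_Customer_Name, eml.CU_EML_Customer_Email
    FROM CU_Customer cu
    JOIN CU_NAM_Customer_Name nam ON nam.CU_ID = cu.CU_ID
    JOIN CU_EML_Customer_Email eml ON eml.CU_ID = cu.CU_ID
    WHERE cu.CU_ID = 42
""").fetchone()
print(row)  # -> ('Ada Lovelace', 'ada@example.com')
```

Each join here resolves to a point lookup on a primary key, which is why the extra tables add only a small constant overhead per attribute rather than a scan.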
In practice, there are probably very few scenarios in which this small loss of performance outweighs the benefits of having a flexible and extendable model capable of handling historical data.
I agree that some sort of quick reference is a good idea. I'll see what I can do in that area as soon as possible. It may still be a while though as other things have higher priority at the moment.
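Until such a reference exists, here is one hedged sketch of the kind of entry it might contain, addressing the many-to-many question above. It assumes the common pattern of promoting an attributed relationship to an anchor of its own (with ties to both sides, and the extra attributes hung off the new anchor); every table and column name below is invented for illustration, again using Python's sqlite3 for a runnable demonstration.

```python
import sqlite3

# Hypothetical many-to-many between Customer and Product that carries
# its own attribute (a rating score). The relationship is promoted to
# an anchor of its own (RA_Rating), tied to each side, with the
# relationship's attribute stored against the new anchor.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CU_Customer (CU_ID INTEGER PRIMARY KEY);
CREATE TABLE PR_Product  (PR_ID INTEGER PRIMARY KEY);
CREATE TABLE RA_Rating   (RA_ID INTEGER PRIMARY KEY);

-- ties binding the relationship anchor to each side
CREATE TABLE RA_CU_Rating_Customer (
    RA_ID INTEGER PRIMARY KEY REFERENCES RA_Rating,
    CU_ID INTEGER NOT NULL REFERENCES CU_Customer
);
CREATE TABLE RA_PR_Rating_Product (
    RA_ID INTEGER PRIMARY KEY REFERENCES RA_Rating,
    PR_ID INTEGER NOT NULL REFERENCES PR_Product
);

-- the relationship's own attribute
CREATE TABLE RA_SCO_Rating_Score (
    RA_ID INTEGER PRIMARY KEY REFERENCES RA_Rating,
    RA_SCO_Rating_Score INTEGER NOT NULL
);

INSERT INTO CU_Customer VALUES (1);
INSERT INTO PR_Product  VALUES (7);
INSERT INTO RA_Rating   VALUES (100);
INSERT INTO RA_CU_Rating_Customer VALUES (100, 1);
INSERT INTO RA_PR_Rating_Product  VALUES (100, 7);
INSERT INTO RA_SCO_Rating_Score   VALUES (100, 5);
""")

# Resolving the relationship: which customer rated which product, and how?
row = con.execute("""
    SELECT c.CU_ID, p.PR_ID, s.RA_SCO_Rating_Score
    FROM RA_Rating r
    JOIN RA_CU_Rating_Customer c ON c.RA_ID = r.RA_ID
    JOIN RA_PR_Rating_Product  p ON p.RA_ID = r.RA_ID
    JOIN RA_SCO_Rating_Score   s ON s.RA_ID = r.RA_ID
""").fetchone()
print(row)  # -> (1, 7, 5)
```

The design choice mirrors what you would do with a linking table in a traditional model, except the link gets its own surrogate identity, so further attributes (or history on them) can be added later without restructuring the relationship.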
My previous employer uses a very similar data model in a very active environment. The only difference between their P2000 model and Anchor Modeling is that in the P2000 model the attributes are not fully 6NF; instead, many attributes are grouped into a single table according to usage.
However, the tables are still rather slim, and yes, I'm quite willing to say that 3000 users are not an issue for a database designed according to Anchor Modeling.