For users of LucidDB with very large data volumes and very simple
schema/query patterns, I've put together a doc on ways to break up the
data (across multiple tables on a single server to enable load
parallelism, or across multiple servers to enable primitive clustering):
I am scoping what would be required to scale out an existing data warehouse built with LucidDB. With distributed horizontal partitioning it seems fairly straightforward:

1. Buy a new server.
2. Add the schema to the new server.
3. Create a SYS_JDBC foreign wrapper on the new server that points at the old server.
4. Use the new server as a coordinator by creating views that union the new local tables with the old remote tables.
5. Point all queries at the new server.
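On the new coordinator node, those steps might look roughly like the following. This is only a sketch: the server name, JDBC URL, credentials, and the schema/table names are placeholders, and the exact SYS_JDBC options and driver class should be checked against the LucidDB documentation for your version.

```sql
-- On the NEW server: wrap the old server via the built-in SYS_JDBC wrapper.
-- All names, URLs, and option values below are hypothetical placeholders.
CREATE SERVER old_node
FOREIGN DATA WRAPPER SYS_JDBC
OPTIONS (
    DRIVER_CLASS 'org.luciddb.jdbc.LucidDbClientDriver',
    URL 'jdbc:luciddb:rmi://old-host',
    USER_NAME 'sa'
);

-- Expose the old server's tables locally.
CREATE SCHEMA remote_dw;
IMPORT FOREIGN SCHEMA dw
FROM SERVER old_node
INTO remote_dw;

-- Coordinator view: union the new local partition with the old remote one.
CREATE SCHEMA dw;
CREATE TABLE dw.sales_local ( ... );  -- same definition as the old table

CREATE VIEW dw.sales AS
    SELECT * FROM dw.sales_local
    UNION ALL
    SELECT * FROM remote_dw.sales;
```

New rows land in dw.sales_local while queries go through dw.sales, so the coordinator sees both partitions as one table.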
The problem is the schema step: there is no way to do a schema-only dump of the existing server to build up the new one. In a perfect world I would have the exact SQL needed to clone a server's schema. I have attempted to reconstruct it by hand, but I am not confident every change has been incorporated 100% correctly, and as maintenance occurs over time this is a very error-prone way of doing things.
Ideally I would like to do a schema-only dump of the old server, apply a few regex replacements where needed, and then run the result through the new server.
Do you have any suggestions for scaling out a LucidDB system? Right now the only solution I have is to reverse engineer each table and view using the sys_root views. Is there any chance a schema-only dump is in the works?
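For reference, the manual reverse-engineering route reads the catalog views under SYS_ROOT; queries along these lines enumerate what would need DDL regenerated. The view and column names are taken from the LucidDB system catalog as I recall it, so verify them against your version before relying on this.

```sql
-- List user tables to reverse-engineer (system schemas excluded).
SELECT schema_name, table_name, table_type
FROM sys_root.dba_tables
WHERE schema_name NOT IN ('SYS_ROOT', 'SQLJ', 'INFORMATION_SCHEMA');

-- View definitions can be pulled from the catalog as well.
SELECT schema_name, view_name, original_text
FROM sys_root.dba_views;
```

Even with these, constraints, indexes, and foreign wrappers still have to be reconstructed by hand, which is why a proper schema-only dump would help.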
> Do you have any suggestions for scaling out a LucidDb system? Right now the
> only solution I have is to reverse engineer each table and view using the
> sys_root commands. Is there any chance a schema only dump is in the works?
re: scaling out.
I think the best option, given the steps you listed, is Firewater. Firewater, albeit at a very early release stage, does the steps you outlined and even does some nice query decomposition/federation across slave LucidDB nodes. Firewater binaries aren't being built for the community yet, but you can build it yourself; we've helped a couple of DynamoBI customers evaluate Firewater.
re: schema only dump.
I can't believe we didn't have a Jira for this. We've long talked about adding UDF/UDX accessors for the already existing Java code that can generate DDL from the catalog objects (similar to your manual method, but better). I've added a Jira case for it:
http://jira.eigenbase.org/browse/FRG-405

Useful for sure. If you'd like to pick it up and create the UDF/UDXes, I'd be happy to shepherd it through as a contribution/commit. We've heard this request before, so I'm sure it will get addressed at some point on its own as well.