
LucidDB for Spatial OLAP

LucidDB for Spatial OLAP

Paul Ramsey-2

Hi folks,

We are doing a project for which Lucid and OLAP tools look like an
excellent choice.  It goes something like this:

- Divide the province of British Columbia up into 100M equally sized
squares.
- For each square, measure a few hundred different environmental and
topographic variables.
- Allow people to summarize information about the province by
arbitrarily grouping up the squares.

In OLAP terms this means we will have a system with between 100M and 200M
facts, 50-100 or so dimensions, and 50-100 or so measures.

As you can imagine, working with transactional databases is starting to
get unwieldy.  We found Lucid and tried to give it a go, but have been
stymied at the data loading stage. I'll leave it to my colleague to
describe our particular environment and techniques.

Paul

--

   Paul Ramsey
   Refractions Research
   http://www.refractions.net
   [hidden email]
   Phone: 250-383-3022
   Cell: 250-885-0632



Re: LucidDB for Spatial OLAP

Emily Gouge
All,

I've been testing out the LucidDB instance for the project Paul has described.  I went through the
ETL tutorial and got all the examples to work.  I then moved on to loading some data
from our PostgreSQL 8.1.1 database into the Lucid environment and have run into a few
issues.

1.  The first challenge I came across was that Postgres usually names everything in lower case,
but the Lucid interface (or maybe this is a JDBC thing) converts everything to upper case.  So
when I did a

import foreign schema habc
from server habc_link
into habc_extraction_schema;

no tables showed up in habc_extraction_schema.  The "habc" schema in our PostgreSQL database has many
tables, but there was no upper case "HABC" schema, so no tables/views were found.  My workaround
was to create an upper case schema and upper case views (with upper case column names) in
Postgres.  Is there a simpler way to do this?
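For reference, the Postgres-side workaround looked roughly like this (an illustrative sketch; the real views cover all the columns):

```sql
-- In PostgreSQL: mirror the lower-case objects under quoted upper-case
-- names so that LucidDB's case-folded lookup can find them.
CREATE SCHEMA "HABC";
CREATE VIEW "HABC"."MASTER_GRID" ("X", "Y") AS
  SELECT x, y FROM habc.master_grid;
```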


2.  The second, larger problem is that the Lucid server crashes when I try to load a large
amount of data from our PostgreSQL database.  The script I used to load the data and the resulting
error message are listed below.  The master_grid table I am trying to load from contains approx.
270,000,000 rows (and about 50 columns; approx. 30 GB of data).  If I make a subset of the table that
is approximately 1 million rows (and 5 columns), I can load that data fine.  Any ideas on how to
resolve this issue?

We are running LucidDB (version 0.6.0) on Linux [CentOS v4.4, kernel v2.6.9].  Java version: 1.6.0_01

Thanks!
Emily

SAMPLE LOADING SCRIPT:

--create server link
create server habc_link
foreign data wrapper sys_jdbc
options(
   driver_class 'org.postgresql.Driver',
   url 'jdbc:postgresql://dbserver:port/dbname',
   user_name 'user'
);

--create transformation schema
create schema habc_transformation_schema;

--import the postgresql habc schema
import foreign schema habc
from server habc_link
into habc_extraction_schema;

--the postgresql habc schema has a master_grid table
create view habc_transformation_schema.location_view as
select x,y
from habc_extraction_schema.master_grid;

create schema habc;

create table habc.location_dimension(
     loc_key int generated always as identity not null primary key,
     x integer not null,
     y integer not null,
     unique(x,y)
);

--This is the load step that causes the server to crash
insert into habc.location_dimension (x,y)
select x,y from habc_transformation_schema.location_view;

ERROR MESSAGE:

#
# An unexpected error has been detected by Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0xb4dba792, pid=31847, tid=2727599024
#
# Java VM: Java HotSpot(TM) Client VM (1.6.0_01-b06 mixed mode, sharing)
# Problematic frame:
# C  [libfennel_btree.so+0x1d792]  _ZN6fennel11BTreeReader9endSearchEv+0x12
#
# An error report file with more information is saved as hs_err_pid31847.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#
*** CAUGHT SIGNAL 6; BACKTRACE:
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::AutoBacktrace::signal_handler(int)+0x37) [0x179f7]
/lib/tls/libpthread.so.0 [0xa01898]
/lib/ld-linux.so.2 [0x7227a2]
/lib/tls/libc.so.6(gsignal+0x55) [0x7677a5]
/lib/tls/libc.so.6(abort+0xe9) [0x769209]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x630358b]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x63ae3c1]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so(JVM_handle_linux_signal+0x1f0) [0x63079c0]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x6305278]
/lib/tls/libpthread.so.0 [0xa01890]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_lu_colstore.so(fennel::LbmSplicerExecStream::closeImpl()+0x28) [0x62718]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::ClosableObject::close()+0x1e) [0x1d29e]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_exec.so(fennel::ExecStreamGraphImpl::closeImpl()+0x26b) [0x51d4b]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::ClosableObject::close()+0x1e) [0x1d29e]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfarrago.so(Java_net_sf_farrago_fennel_FennelStorage_tupleStreamGraphClose+0x170) [0xb4f45f30]
[0xb5d0267e]
[0xb5cfae9d]
[0xb5cfae9d]
[0xb5cfae9d]
[0xb5cfae9d]
[0xb5cfb379]
[0xb5cfb213]
[0xb5cfb379]
[0xb5cfb14d]
[0xb5cfb213]
[0xb5cfad37]
[0xb5cfad37]
[0xb5cfb213]
[0xb5fccc81]
[0xb5fce0d3]
[0xb5cfad37]
[0xb5cfb379]
./lucidDbServer: line 9: 31847 Aborted                 ${JAVA_EXEC} ${JAVA_ARGS} com.lucidera.farrago.LucidDbServer






Re: LucidDB for Spatial OLAP

John Sichi
Emily Gouge wrote:
> import foreign schema habc
> from server habc_link
> into habc_extraction_schema;
>
> no tables showed up in habc_extraction_schema.  The "habc" schema in our PostgreSQL database has many
> tables, but there was no upper case "HABC" schema, so no tables/views were found.  My workaround
> was to create an upper case schema and upper case views (with upper case column names) in
> Postgres.  Is there a simpler way to do this?

Yes, LucidDB supports the SQL standard for using double-quotes around
any identifier to preserve case, so:

import foreign schema "habc"
from server habc_link
into habc_extraction_schema;
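The same quoting applies anywhere you reference case-sensitive names later on; for example (illustrative, assuming the imported table keeps its lower-case name):

```sql
-- Quoted identifiers preserve the exact case of the Postgres objects.
create view habc_transformation_schema.location_view as
select "x", "y"
from habc_extraction_schema."master_grid";
```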

> 2.  The second, larger problem I had, is that the Lucid server crashes when I try to load in large
> amounts of data from our postgresql database.  The script I used to load data and the resulting
> error message are listed below.  The master_grid table I am trying to load from contains approx.
> 270,000,000 rows (and about 50 columns; approx 30G of data).  If I make a subset of the table that
> is approximately 1 million rows (and 5 columns) I can load that data fine.  Any ideas on how to
> resolve this issue?
>
> We are running LucidDB (version 0.6.0) on Linux [CentOS v4.4, kernel v2.6.9].  Java version: 1.6.0_01

There have been a lot of bugfixes and enhancements (like support for
concurrent read/write) checked into Perforce since the 0.6.0 release in
January.  The crash below looks like an error unwind problem which has
been fixed.  This means there's probably some other earlier error logged
before that in /mnt/lucid/luciddb-0.6.0/trace/LucidDbTrace.log.  Could
you mail the contents of that file to this list (or enough of the tail
to show what happened before the crash)?  If we can figure out what's
causing the earlier error, you may be able to get past this without a new
version.

If not, the latest code is stable enough that we can put out an 0.7 release
within a few days to see if that resolves the problem.
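In the meantime, you could also try splitting the load into smaller slices to confirm that data volume is what triggers the crash; an untested sketch, assuming x values can be used to partition the source:

```sql
-- Hypothetical chunked load; repeat with successive x ranges
-- until the whole view has been inserted.
insert into habc.location_dimension (x, y)
select x, y
from habc_transformation_schema.location_view
where x >= 0 and x < 1000000;
```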

(Note that as far as I know, most testing up until now has been on Java
1.5.)

JVS



Re: LucidDB for Spatial OLAP

Emily Gouge
Thanks for pointing out the double-quote solution.

I've attached the Trace.log file and I'll try it again on 0.7 when it is released and let you know
if I continue to have problems.

Thanks for your help.

Emily


May 1, 2007 12:28:10 PM net.sf.farrago.db.FarragoDbSingleton pinReference
INFO: connect
May 1, 2007 12:28:10 PM net.sf.farrago.db.FarragoDatabase dumpTraceConfig
CONFIG: # Tracing configuration

handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.append=true
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter

java.util.logging.FileHandler.pattern=/mnt/lucid/luciddb-0.6.0/trace/Trace.log

.level=CONFIG

May 1, 2007 12:28:14 PM net.sf.farrago.catalog.FarragoMdrReposImpl <init>
INFO: Catalog successfully loaded
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase <init>
CONFIG: java.class.path = /usr/java/jre1.6.0_01/lib/tools.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/jmi.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/jmiutils.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/mdrapi.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/mdrjdbc.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/mof.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/nbmdr.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/openide-util.jar:/mnt/lucid/luciddb-0.6.0/lib/janino.jar:/mnt/lucid/luciddb-0.6.0/lib/eigenbase-resgen.jar:/mnt/lucid/luciddb-0.6.0/lib/eigenbase-xom.jar:/mnt/lucid/luciddb-0.6.0/lib/openjava.jar:/mnt/lucid/luciddb-0.6.0/lib/RmiJdbc.jar:/mnt/lucid/luciddb-0.6.0/lib/csvjdbc.jar:/mnt/lucid/luciddb-0.6.0/lib/sqlline.jar:/mnt/lucid/luciddb-0.6.0/lib/jline.jar:/mnt/lucid/luciddb-0.6.0/lib/jgrapht-jdk1.4.jar:/mnt/lucid/luciddb-0.6.0/lib/jgrapht-jdk1.5.jar:/mnt/lucid/luciddb-0.6.0/lib/jgrapht7-jdk1.5.jar:/mnt/lucid/luciddb-0.6.0/lib/hsqldb.jar:/mnt/lucid/luciddb-0.6.0/lib/postgresql-8.1-406.jdbc2.jar:/mnt/lucid/luciddb-0.6.0/lib/commons-transaction-1.1.jar:/mnt/lucid/luciddb-0.6.0/lib/vjdbc.jar:/mnt/lucid/luciddb-0.6.0/lib/vjdbc_server.jar:/mnt/lucid/luciddb-0.6.0/lib/commons-logging.jar:/mnt/lucid/luciddb-0.6.0/lib/commons-pool-1.2.jar:/mnt/lucid/luciddb-0.6.0/lib/commons-dbcp-1.2.1.jar:/mnt/lucid/luciddb-0.6.0/lib/farrago.jar
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase <init>
CONFIG: java.library.path = /usr/java/jre1.6.0_01/lib/i386/client:/usr/java/jre1.6.0_01/lib/i386:/usr/java/jre1.6.0_01/../lib/i386:/mnt/lucid/luciddb-0.6.0/lib/fennel:/usr/java/packages/lib/i386:/lib:/usr/lib
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter cachePageSize=32768
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter cachePagesInit=5000
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter cachePagesMax=5000
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter cacheReservePercentage=5
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseIncrementSize=1000
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseInitSize=1000
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseMaxSize=0
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseShadowLogIncrementSize=1000
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseShadowLogInitSize=2000
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseTxnLogIncrementSize=1000
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseTxnLogInitSize=2000
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter expectedConcurrentStatements=4
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter forceTxns=true
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter groupCommitInterval=0
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter jniHandleTraceFile=
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter resourceDir=/mnt/lucid/luciddb-0.6.0/catalog/fennel
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter tempIncrementSize=1000
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter tempInitSize=1000
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter tempMaxSize=0
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseDir=/mnt/lucid/luciddb-0.6.0/catalog
May 1, 2007 12:28:14 PM net.sf.fennel.database <native>
WARNING: recovery required
May 1, 2007 12:28:14 PM net.sf.fennel.database <native>
INFO: recovery beginning; page version = 288
May 1, 2007 12:28:14 PM net.sf.fennel.database <native>
INFO: recovery completed
May 1, 2007 12:28:14 PM net.sf.fennel.database <native>
INFO: opening database; process ID = 967
May 1, 2007 12:28:14 PM net.sf.fennel.database <native>
INFO: online UUID = 73789930-8149-4130-a2e5-d659d58a461e
May 1, 2007 12:28:14 PM net.sf.fennel.database <native>
INFO: database opened; page version = 289
May 1, 2007 12:28:14 PM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel successfully loaded
May 1, 2007 12:28:14 PM de.simplicit.vjdbc.server.rmi.ConnectionServer serve
INFO: Starting RMI-Registry on port 5434
May 1, 2007 12:28:14 PM de.simplicit.vjdbc.server.rmi.ConnectionServer serve
INFO: Binding remote object to 'VJdbc'
May 1, 2007 12:28:29 PM net.sf.farrago.db.FarragoDbSingleton pinReference
INFO: connect
May 1, 2007 12:28:29 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:32 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:32 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: drop server habc_link cascade
May 1, 2007 12:28:32 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: drop schema habc cascade
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: drop schema habc_transformation_schema cascade
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: create server habc_link
foreign data wrapper sys_jdbc
options(
   driver_class 'org.postgresql.Driver',
   url 'jdbc:postgresql://turtle:7654/habc',
   user_name 'egouge'
)
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: create schema habc_transformation_schema
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: import foreign schema habc
from server habc_link
into habc_extraction_schema
May 1, 2007 12:28:33 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: create view habc_transformation_schema.location_view as
select x,y
from habc_extraction_schema.master_grid
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbSession prepare
INFO:
select x,y
from habc_extraction_schema.master_grid
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: create schema habc
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: create table habc.location_dimension(
     loc_key int generated always as identity not null primary key,
     x integer not null,
     y integer not null,
     unique(x,y)
)
May 1, 2007 12:28:35 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:36 PM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 1, 2007 12:28:36 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:36 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:37 PM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 1, 2007 12:28:37 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: insert into habc.location_dimension (x,y)
select x,y from habc_transformation_schema.location_view
May 1, 2007 12:29:20 PM org.eigenbase.util.EigenbaseException <init>
SEVERE: org.eigenbase.util.EigenbaseException: Failed to access data server for execution
May 1, 2007 12:29:20 PM net.sf.fennel.backtrace <native>
SEVERE: *** CAUGHT SIGNAL 6; BACKTRACE:
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::AutoBacktrace::signal_handler(int)+0x37) [0x179f7]
/lib/tls/libpthread.so.0 [0xa01898]
/lib/ld-linux.so.2 [0x7227a2]
/lib/tls/libc.so.6(gsignal+0x55) [0x7677a5]
/lib/tls/libc.so.6(abort+0xe9) [0x769209]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x630358b]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x63ae3c1]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so(JVM_handle_linux_signal+0x1f0) [0x63079c0]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x6305278]
/lib/tls/libpthread.so.0 [0xa01890]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_lu_colstore.so(fennel::LbmSplicerExecStream::closeImpl()+0x28) [0x62718]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::ClosableObject::close()+0x1e) [0x1d29e]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_exec.so(fennel::ExecStreamGraphImpl::closeImpl()+0x26b) [0x51d4b]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::ClosableObject::close()+0x1e) [0x1d29e]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfarrago.so(Java_net_sf_farrago_fennel_FennelStorage_tupleStreamGraphClose+0x170) [0xb514ef30]
[0xb5d0267e]
[0xb5cfae9d]
[0xb5cfae9d]
[0xb5cfae9d]
[0xb5cfae9d]
[0xb5cfb379]
[0xb5cfb213]
[0xb5cfb379]
[0xb5cfb14d]
[0xb5cfb213]
[0xb5cfad37]
[0xb5cfad37]
[0xb5cfb213]
[0xb5cfad37]
[0xb5cfb213]
[0xb5cfad37]
[0xb5cfb379]


Re: LucidDB for Spatial OLAP

Leo Giertz

Hi Emily!

The log file you attached doesn't really contain enough information, since the
default settings in LucidDB are a bit terse.  Could you please set
net.sf.farrago.jdbc.level=FINER in your Trace.properties?

Hopefully the real problem will show up in the logfile then.

Thanks!

-L






Re: LucidDB for Spatial OLAP

Emily Gouge
Leo,

I set net.sf.farrago.jdbc.level=FINER in the Trace.properties file and attached the new logfile.
However, I'm not sure it has any more information than the first one I sent.  I've attached both
the new log file and my Trace.properties file.

Thanks,
Emily





May 2, 2007 5:10:04 AM net.sf.farrago.db.FarragoDbSingleton pinReference
INFO: connect
May 2, 2007 5:10:04 AM net.sf.farrago.db.FarragoDatabase dumpTraceConfig
CONFIG: # Tracing configuration

handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.append=true
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter

java.util.logging.FileHandler.pattern=/mnt/lucid/luciddb-0.6.0/trace/Trace.log

.level=CONFIG

net.sf.farrago.jdbc.level=FINER

May 2, 2007 5:10:07 AM net.sf.farrago.catalog.FarragoMdrReposImpl <init>
INFO: Catalog successfully loaded
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase <init>
CONFIG: java.class.path = /usr/java/jre1.6.0_01/lib/tools.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/jmi.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/jmiutils.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/mdrapi.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/mdrjdbc.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/mof.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/nbmdr.jar:/mnt/lucid/luciddb-0.6.0/lib/mdrlibs/openide-util.jar:/mnt/lucid/luciddb-0.6.0/lib/janino.jar:/mnt/lucid/luciddb-0.6.0/lib/eigenbase-resgen.jar:/mnt/lucid/luciddb-0.6.0/lib/eigenbase-xom.jar:/mnt/lucid/luciddb-0.6.0/lib/openjava.jar:/mnt/lucid/luciddb-0.6.0/lib/RmiJdbc.jar:/mnt/lucid/luciddb-0.6.0/lib/csvjdbc.jar:/mnt/lucid/luciddb-0.6.0/lib/sqlline.jar:/mnt/lucid/luciddb-0.6.0/lib/jline.jar:/mnt/lucid/luciddb-0.6.0/lib/jgrapht-jdk1.4.jar:/mnt/lucid/luciddb-0.6.0/lib/jgrapht-jdk1.5.jar:/mnt/lucid/luciddb-0.6.0/lib/jgrapht7-jdk1.5.jar:/mnt/lucid/luciddb-0.6.0/lib/hsqldb.jar:/mnt/lucid/luciddb-0.6.0/lib/postgresql-8.1-406.jdbc2.jar:/mnt/lucid/luciddb-0.6.0/lib/commons-transaction-1.1.jar:/mnt/lucid/luciddb-0.6.0/lib/vjdbc.jar:/mnt/lucid/luciddb-0.6.0/lib/vjdbc_server.jar:/mnt/lucid/luciddb-0.6.0/lib/commons-logging.jar:/mnt/lucid/luciddb-0.6.0/lib/commons-pool-1.2.jar:/mnt/lucid/luciddb-0.6.0/lib/commons-dbcp-1.2.1.jar:/mnt/lucid/luciddb-0.6.0/lib/farrago.jar
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase <init>
CONFIG: java.library.path = /usr/java/jre1.6.0_01/lib/i386/client:/usr/java/jre1.6.0_01/lib/i386:/usr/java/jre1.6.0_01/../lib/i386:/mnt/lucid/luciddb-0.6.0/lib/fennel:/usr/java/packages/lib/i386:/lib:/usr/lib
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter cachePageSize=32768
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter cachePagesInit=5000
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter cachePagesMax=5000
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter cacheReservePercentage=5
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseIncrementSize=1000
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseInitSize=1000
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseMaxSize=0
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseShadowLogIncrementSize=1000
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseShadowLogInitSize=2000
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseTxnLogIncrementSize=1000
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseTxnLogInitSize=2000
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter expectedConcurrentStatements=4
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter forceTxns=true
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter groupCommitInterval=0
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter jniHandleTraceFile=
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter resourceDir=/mnt/lucid/luciddb-0.6.0/catalog/fennel
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter tempIncrementSize=1000
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter tempInitSize=1000
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter tempMaxSize=0
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel parameter databaseDir=/mnt/lucid/luciddb-0.6.0/catalog
May 2, 2007 5:10:07 AM net.sf.fennel.database <native>
WARNING: recovery required
May 2, 2007 5:10:07 AM net.sf.fennel.database <native>
INFO: recovery beginning; page version = 325
May 2, 2007 5:10:07 AM net.sf.fennel.database <native>
INFO: recovery completed
May 2, 2007 5:10:07 AM net.sf.fennel.database <native>
INFO: opening database; process ID = 6034
May 2, 2007 5:10:07 AM net.sf.fennel.database <native>
INFO: online UUID = 702d260a-deb7-4216-baac-0b4b5a509f64
May 2, 2007 5:10:07 AM net.sf.fennel.database <native>
INFO: database opened; page version = 326
May 2, 2007 5:10:07 AM net.sf.farrago.db.FarragoDatabase loadFennel
CONFIG: Fennel successfully loaded
May 2, 2007 5:10:07 AM de.simplicit.vjdbc.server.rmi.ConnectionServer serve
INFO: Starting RMI-Registry on port 5434
May 2, 2007 5:10:07 AM de.simplicit.vjdbc.server.rmi.ConnectionServer serve
INFO: Binding remote object to 'VJdbc'
May 2, 2007 5:10:10 AM net.sf.farrago.db.FarragoDbSingleton pinReference
INFO: connect
May 2, 2007 5:10:10 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: drop server habc_link cascade
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: drop schema habc cascade
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: drop schema habc_transformation_schema cascade
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: create server habc_link
foreign data wrapper sys_jdbc
options(
  driver_class 'org.postgresql.Driver',
  url 'jdbc:postgresql://turtle:7654/habc',
  user_name 'egouge'
)
May 2, 2007 5:10:19 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: create schema habc_transformation_schema
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: import foreign schema habc
from server habc_link
into habc_extraction_schema
May 2, 2007 5:10:20 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: create view habc_transformation_schema.location_view as
select x,y
from habc_extraction_schema.master_grid
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbSession prepare
INFO:
select x,y
from habc_extraction_schema.master_grid
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: create schema habc
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: create table habc.location_dimension(
    loc_key int generated always as identity not null primary key,
    x integer not null,
    y integer not null,
    unique(x,y)
)
May 2, 2007 5:10:21 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:22 AM net.sf.farrago.db.FarragoDbSession commitImpl
INFO: commit
May 2, 2007 5:10:22 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:22 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:23 AM net.sf.farrago.db.FarragoDbStmtContext cancel
INFO: cancel
May 2, 2007 5:10:23 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: insert into habc.location_dimension (x,y) select x,y from habc_transformation_schema.location_view
May 2, 2007 5:10:59 AM org.eigenbase.util.EigenbaseException <init>
SEVERE: org.eigenbase.util.EigenbaseException: Failed to access data server for execution
May 2, 2007 5:10:59 AM net.sf.fennel.backtrace <native>
SEVERE: *** CAUGHT SIGNAL 6; BACKTRACE:
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::AutoBacktrace::signal_handler(int)+0x37) [0x179f7]
/lib/tls/libpthread.so.0 [0xa01898]
/lib/ld-linux.so.2 [0x7227a2]
/lib/tls/libc.so.6(gsignal+0x55) [0x7677a5]
/lib/tls/libc.so.6(abort+0xe9) [0x769209]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x630358b]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x63ae3c1]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so(JVM_handle_linux_signal+0x1f0) [0x63079c0]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x6305278]
/lib/tls/libpthread.so.0 [0xa01890]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_lu_colstore.so(fennel::LbmSplicerExecStream::closeImpl()+0x28) [0x62718]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::ClosableObject::close()+0x1e) [0x1d29e]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_exec.so(fennel::ExecStreamGraphImpl::closeImpl()+0x26b) [0x51d4b]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::ClosableObject::close()+0x1e) [0x1d29e]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfarrago.so(Java_net_sf_farrago_fennel_FennelStorage_tupleStreamGraphClose+0x170) [0xb4f00f30]
[0xb5d0267e]
[0xb5cfae9d]
[0xb5cfae9d]
[0xb5cfae9d]
[0xb5cfae9d]
[0xb5cfb379]
[0xb5cfb213]
[0xb5cfb379]
[0xb5cfb14d]
[0xb5cfb213]
[0xb5cfad37]
[0xb5cfad37]
[0xb5cfb213]
[0xb5cfad37]
[0xb5cfb213]
[0xb5cfad37]
[0xb5cfb379]


# Tracing configuration

handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.append=true
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter

java.util.logging.FileHandler.pattern=/mnt/lucid/luciddb-0.6.0/trace/Trace.log

.level=CONFIG

net.sf.farrago.jdbc.level=FINER

Re: LucidDB for Spatial OLAP

John Sichi
Administrator
Emily Gouge wrote:
> I set the net.sf.farrago.jdbc.level=FINER in the Trace.properties file
> and attached the new logfile.  However I'm not sure it has any more
> information than the first one I sent.  I've attached both the new log
> file and my Trace.properties file.

Hmmm...the trace has this in the log just before the crash:

SEVERE: org.eigenbase.util.EigenbaseException: Failed to access data
server for execution

Usually that means there was some problem when LucidDB calls the foreign
server's JDBC driver to prepare and execute the query, but for some
reason the underlying exception isn't being traced.

Instead of the insert statement, can you try just a query:

select count(*) from habc_extraction_schema.master_grid

This will attempt to pull back all the rows from the PostgreSQL server
and count them.

JVS



Re: LucidDB for Spatial OLAP

Emily Gouge
The select query results in a Java Out of Memory Error:

0: jdbc:luciddb:rmi://localhost> select count(*) from habc_extraction_schema.master_grid;

Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)


I can however run counts and extract from other tables/views with fewer records:

0: jdbc:luciddb:rmi://localhost> select count(*) from habc_extraction_schema.lwdpbc;
+---------+
| EXPR$0  |
+---------+
| 19249   |
+---------+
1 row selected (7.179 seconds)

Emily

John V. Sichi wrote:

> Emily Gouge wrote:
>> I set the net.sf.farrago.jdbc.level=FINER in the Trace.properties file
>> and attached the new logfile.  However I'm not sure it has any more
>> information than the first one I sent.  I've attached both the new log
>> file and my Trace.properties file.
>
> Hmmm...the trace has this in the log just before the crash:
>
> SEVERE: org.eigenbase.util.EigenbaseException: Failed to access data
> server for execution
>
> Usually that means there was some problem when LucidDB calls the foreign
> server's JDBC driver to prepare and execute the query, but for some
> reason the underlying exception isn't being traced.
>
> Instead of the insert statement, can you try just a query:
>
> select count(*) from habc_extraction_schema.master_grid
>
> This will attempt to pull back all the rows from the PostgreSQL server
> and count them.
>
> JVS




Re: LucidDB for Spatial OLAP

John Sichi
Administrator
Emily Gouge wrote:
> The select query results in a Java Out of Memory Error:
>
> 0: jdbc:luciddb:rmi://localhost> select count(*) from
> habc_extraction_schema.master_grid;
>
> Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)

Ah, I wonder if it could have anything to do with this?

http://mail-archives.apache.org/mod_mbox/db-ojb-user/200504.mbox/%3C425E9353.3000500@...%3E
http://postgis.refractions.net/pipermail/postgis-users/2005-August/008875.html

We may need to add something to the JDBC foreign data wrapper to allow
control over the fetch size to prevent the PostgreSQL JDBC driver from
effectively leaking per-row.  Sigh.

As a workaround, you could try loading the data in large chunks of rows
via a WHERE clause on some partitioning key (if there is one in the
source data).

Another clunky alternative is to dump the data from PostgreSQL into a
csv file and load via LucidDB's flatfile reader.  There have recently
been some problem reports about trying to load the TPC-H 10gig
dataset via flatfiles due to a bug in the flatfile reader causing it to
go into an infinite loop, so it depends whether you're attempting to
load your full data set or a smaller test set.
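
For reference, and setting aside the flatfile-reader bug just mentioned, a flatfile load along those lines might be sketched as follows. This is an untested sketch: the wrapper name sys_file_wrapper and the directory/file_extension/with_header options follow the LucidDB flatfile reader documentation, while the dump directory is made up.

-- Sketch only: first dump the table from PostgreSQL, e.g.
--   COPY habc.master_grid TO '/mnt/export/habc/master_grid.csv'
--        WITH CSV HEADER;
-- then expose the dump directory to LucidDB as a foreign server:
create server csv_server
foreign data wrapper sys_file_wrapper
options (
    directory '/mnt/export/habc',   -- hypothetical dump location
    file_extension 'csv',
    with_header 'yes'
);

The files in that directory can then be imported into a local schema and loaded with a plain INSERT ... SELECT, just as with the JDBC link.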

JVS



Re: LucidDB for Spatial OLAP

Emily Gouge
Interesting.  I'll try some of the workaround ideas; hopefully I'll have some success.

Thanks for all your help!
Emily


John V. Sichi wrote:

> Emily Gouge wrote:
>> The select query results in a Java Out of Memory Error:
>>
>> 0: jdbc:luciddb:rmi://localhost> select count(*) from
>> habc_extraction_schema.master_grid;
>>
>> Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)
>
> Ah, I wonder if it could have anything to do with this?
>
> http://mail-archives.apache.org/mod_mbox/db-ojb-user/200504.mbox/%3C425E9353.3000500@...%3E 
>
> http://postgis.refractions.net/pipermail/postgis-users/2005-August/008875.html 
>
>
> We may need to add something to the JDBC foreign data wrapper to allow
> control over the fetch size to prevent the PostgreSQL JDBC driver from
> effectively leaking per-row.  Sigh.
>
> As a workaround, you could try loading the data in large chunks of rows
> via a WHERE clause on some partitioning key (if there is one in the
> source data).
>
> Another clunky alternative is to dump the data from PostgreSQL into a
> csv file and load via LucidDB's flatfile reader.  There have recently
> been some problem reports about trying to load the TPC-H 10gig
> dataset via flatfiles due to a bug in the flatfile reader causing it to
> go into an infinite loop, so it depends whether you're attempting to
> load your full data set or a smaller test set.
>
> JVS




Re: LucidDB for Spatial OLAP

Emily Gouge
In reply to this post by John Sichi

 > As a workaround, you could try loading the data in large chunks of rows
 > via a WHERE clause on some partitioning key (if there is one in the
 > source data).

I tried this; however, I am still getting Java heap space errors.  This query should return only one row:

select "x","y", "ecosec_v2_code", "lwdpbc_code", "bececolwd_v2_code", "dra_code" from
habc_extraction_schema."master_grid" where "x" = 0 and "y" = 0;

causes:
Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)


I have noticed that adding the where clause to the query does not change the query being run on the
postgresql database.  Both cases cause a "SELECT * FROM "habc"."master_grid"" query to be run on the
postgresql database with no where clause.

Any ideas why the query:
select "x","y", "ecosec_v2_code", "lwdpbc_code", "bececolwd_v2_code", "dra_code" from
habc_extraction_schema."master_grid" where "x" = 0 and "y" = 0;

is being converted to:
select * from "habc"."master_grid"

Thanks again.









Re: LucidDB for Spatial OLAP

Rushan Chen
Hi Emily,

The projection (select-list items) and filters (WHERE clause) are not
pushed down through the JDBC wrapper. That's why the SQL showing up on
the PostgreSQL server has no WHERE clause and selects every column.

Is it possible to create views on the PostgreSQL side to divide the
original big table into smaller chunks and load them into LucidDB one
at a time? In your script, the definition of
habc_transformation_schema.location_view would then have to be a UNION
of these source views.
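
To make that concrete, a sketch (the chunking predicate on x and the chunk view names are invented for illustration; the real partitioning key would come from the source data):

-- On the PostgreSQL side: split the big table into smaller chunks.
create view habc.master_grid_chunk1 as
  select x, y from habc.master_grid where x < 5000;
create view habc.master_grid_chunk2 as
  select x, y from habc.master_grid where x >= 5000;

-- On the LucidDB side, after re-importing the foreign schema, the
-- transformation view becomes a UNION of the chunk views:
create view habc_transformation_schema.location_view as
  select x, y from habc_extraction_schema.master_grid_chunk1
  union all
  select x, y from habc_extraction_schema.master_grid_chunk2;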

Hope this helps.

Rushan

Emily Gouge wrote:

>  > As a workaround, you could try loading the data in large chunks of rows
>  > via a WHERE clause on some partitioning key (if there is one in the
>  > source data).
>
> I tried this, however I am still getting Java heap space errors.  This query should return only one row.
>
> select "x","y", "ecosec_v2_code", "lwdpbc_code", "bececolwd_v2_code", "dra_code" from
> habc_extraction_schema."master_grid" where "x" = 0 and"y" = 0;
>
> causes:
> Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)
>
>
> I have noticed that adding the where clause to the query does not change the query being run on the
> postgresql database.  Both cases cause a "SELECT * FROM "habc"."master_grid"" query to be run on the
> postgresql database with no where clause.
>
> Any ideas on how the query:
> select "x","y", "ecosec_v2_code", "lwdpbc_code", "bececolwd_v2_code", "dra_code" from
> habc_extraction_schema."master_grid" where "x" = 0 and"y" = 0;
>
> is being converted to a:
> select * from "habc"."master_grid"
>
> Thanks again.
>
>
>
>
>
>
>




Re: LucidDB for Spatial OLAP

Paul Ramsey-2
Rather than continuing in this increasingly messy direction, how  
about a different question:

If you had to load 200M rows of data 50 columns wide into LucidDB  
from source A, what would your ideal source A be?

P

On 3-May-07, at 2:40 PM, Rushan Chen wrote:

> Hi Emily,
>
> The projection(select list items) and filters(where clause) are not
> pushed through the JDBC. That's why the SQL showing up on the  
> postgresql
> server has no where clause and selects every column.
>
> Is it possible to create views on the postgresql to divide the  
> original
> big table into smaller chunks and load them into LucidDb? In your
> script, the definition of habc_transformation_schema.location_view  
> will
> have to be UNIONs of these source views.
>
> Hope this helps.
>
> Rushan
>
> Emily Gouge wrote:
>>> As a workaround, you could try loading the data in large chunks  
>>> of rows
>>> via a WHERE clause on some partitioning key (if there is one in the
>>> source data).
>>
>> I tried this, however I am still getting Java heap space errors.  
>> This query should return only one row.
>>
>> select "x","y", "ecosec_v2_code", "lwdpbc_code",  
>> "bececolwd_v2_code", "dra_code" from
>> habc_extraction_schema."master_grid" where "x" = 0 and"y" = 0;
>>
>> causes:
>> Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)
>>
>>
>> I have noticed that adding the where clause to the query does not  
>> change the query being run on the
>> postgresql database.  Both cases cause a "SELECT * FROM  
>> "habc"."master_grid"" query to be run on the
>> postgresql database with no where clause.
>>
>> Any ideas on how the query:
>> select "x","y", "ecosec_v2_code", "lwdpbc_code",  
>> "bececolwd_v2_code", "dra_code" from
>> habc_extraction_schema."master_grid" where "x" = 0 and"y" = 0;
>>
>> is being converted to a:
>> select * from "habc"."master_grid"
>>
>> Thanks again.
>>
>>
>>
>>
>>
>>
>>
>
>




Re: LucidDB for Spatial OLAP

John Sichi
Administrator
Paul Ramsey wrote:
> Rather than continuing in this increasingly messy direction, how  
> about a different question:
>
> If you had to load 200M rows of data 50 columns wide into LucidDB  
> from source A, what would your ideal source A be?

A = any DBMS with a JDBC driver that's been tested with LucidDB already.
The ones I know of for sure in that category are the Oracle thin
driver and the jTDS open-source driver for SQL Server.  It's currently
necessary to add the corresponding driver to bin/classpath.gen because
LucidDB doesn't yet support the SQL:2003 DDL for declarative jar
dependencies.  (Looks like the PostgreSQL driver got packaged on there
by accident, which is why you didn't have to do anything special.)

It shouldn't be hard to enhance the JDBC foreign data wrapper to allow
it to get past the PostgreSQL driver limitation.  I've logged an
enhancement request for it, and will make sure it gets into the 0.7 release:

http://issues.eigenbase.org/browse/FRG-267

(Sorry for the unworkable suggestion about the WHERE clauses; the
necessary filter pushdown optimization hasn't been released yet.)

JVS



Re: LucidDB for Spatial OLAP

Paul Ramsey-2
Thanks John, we'll look forward to the 0.7 release and give it a try.
P

> John V. Sichi wrote:
>> It shouldn't be hard to enhance the JDBC foreign data wrapper to allow
>> it to get past the PostgreSQL driver limitation.  I've logged an
>> enhancement request for it, and will make sure it gets into the 0.7
>> release:
       

--

   Paul Ramsey
   Refractions Research
   http://www.refractions.net
   [hidden email]
   Phone: 250-383-3022
   Cell: 250-885-0632



Re: LucidDB for Spatial OLAP

John Sichi
Administrator
Paul Ramsey wrote:
> Thanks John, we'll look forward to the 0.7 release and give it a try.
> P
>
>> John V. Sichi wrote:
>>> It shouldn't be hard to enhance the JDBC foreign data wrapper to allow
>>> it to get past the PostgreSQL driver limitation.  I've logged an
>>> enhancement request for it, and will make sure it gets into the 0.7
>>> release:

Linux binaries for 0.7 prerelease are now available:

http://downloads.sf.net/luciddb/luciddb-bin-linux-0.7.0-pre1.tar.bz2

See README file for changes from 0.6.  Release notes mention that
upgrade isn't supported yet, so be sure to install in a fresh location.

Use new options fetch_size and autocommit to avoid the PostgreSQL driver
memory problem.  I tested by loading 8 million rows into a PostgreSQL
server and reducing the JVM heap limit for LucidDB.  With the default
option settings, I could reproduce the OutOfMemory error via select
count(*); with the new option settings, I was able to run the same query
successfully.  Here's Emily's example modified to use the new settings:

create server habc_link
foreign data wrapper sys_jdbc
options(
   driver_class 'org.postgresql.Driver',
   url 'jdbc:postgresql://turtle:7654/habc',
   user_name 'egouge',
   fetch_size '10000',
   autocommit 'false'
);

I didn't test what the ideal fetch_size should be; too small a value
would probably hurt extraction speed.
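
Assuming the rest of Emily's original script stays the same, the load that crashed before would then be retried through the recreated link:

import foreign schema habc
from server habc_link
into habc_extraction_schema;

insert into habc.location_dimension (x, y)
select x, y from habc_transformation_schema.location_view;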

JVS



Re: LucidDB for Spatial OLAP

John Sichi
Administrator
John V. Sichi wrote:
> Linux binaries for 0.7 prerelease are now available:
>
> http://downloads.sf.net/luciddb/luciddb-bin-linux-0.7.0-pre1.tar.bz2

Update on this:  testing against the TPC-H 10 gigabyte dataset has
turned up some bugs with datafile sizes beyond 4 gigabytes.  The fixes
for these will be included as part of the official 0.7 release.  Until
then, attempting to load more than 4 gigabytes of data is not recommended.

JVS



Connection to Lucid through SQuirreL SQL Client

Emily Gouge
In reply to this post by Emily Gouge
I'm hoping somebody can point me in the right direction, as I'm having problems connecting to a LucidDB
database server through the SQuirreL SQL client.

I've downloaded and set up SQuirreL.  I have connected to a PostgreSQL database without any issues.
When I try to connect to the LucidDB server I get the following error.  I have seen
http://www.eigenbase.org/wiki/index.php/ClientServerLocalhost and tried modifying the hosts file and
rebooting the machine without any success.  Can anybody provide me with any insight into how I can
get around this issue?

Thanks,
Emily

Connection URL:
jdbc:luciddb:rmi://lucidserver:5434/

Error:
Lucid-HaBC: java.rmi.NotBoundException: /VJdbc
        at sun.rmi.registry.RegistryImpl.lookup(Unknown Source)
        at sun.rmi.registry.RegistryImpl_Skel.dispatch(Unknown Source)
        at sun.rmi.server.UnicastServerRef.oldDispatch(Unknown Source)
        at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
        at sun.rmi.transport.Transport$1.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.Transport.serviceCall(Unknown Source)
        at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
        at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255)
        at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233)
        at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:359)
        at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
        at java.rmi.Naming.lookup(Naming.java:84)
        at de.simplicit.vjdbc.VirtualDriver.createRmiCommandSink(VirtualDriver.java:182)
        at de.simplicit.vjdbc.VirtualDriver.connect(VirtualDriver.java:110)
        at
net.sf.farrago.jdbc.client.FarragoUnregisteredVjdbcClientDriver.connect(FarragoUnregisteredVjdbcClientDriver.java:97)
        at net.sourceforge.squirrel_sql.fw.sql.SQLDriverManager.getConnection(SQLDriverManager.java:133)
        at
net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.execute(OpenConnectionCommand.java:97)
        at
net.sourceforge.squirrel_sql.client.mainframe.action.ConnectToAliasCommand$SheetHandler.run(ConnectToAliasCommand.java:283)
        at net.sourceforge.squirrel_sql.fw.util.TaskExecuter.run(TaskExecuter.java:82)
        at java.lang.Thread.run(Thread.java:619)




Re: Connection to Lucid through SQuirreL SQL Client

John Sichi
Administrator
Hi Emily,

You have a trailing slash on the URL.  Try it without:

jdbc:luciddb:rmi://lucidserver:5434

I tried it with a trailing slash and was able to reproduce the exact
error message you got, so I'm pretty sure this is what's giving you trouble.

Eigenbase URL conventions are documented here:

http://docs.eigenbase.org/JdbcUrlConventions

JVS

Emily Gouge wrote:

> I'm hoping somebody can point me in the right direction as I'm having problems connecting to a lucid
> database server through the SQuirreL SQL client.
>
> I've downloaded and setup Squirrel.  I have connected to a Postgresql database without any issues.
> When I try to connect to the lucid database server I get the following error.  I have seen
> http://www.eigenbase.org/wiki/index.php/ClientServerLocalhost and tried modifying the hosts file and
> rebooting the machine without any success.  Can anybody provide me with any insight into how I can
> get around this issue?
>
> Thanks,
> Emily
>
> Connection URL:
> jdbc:luciddb:rmi://lucidserver:5434/
>
> Error:
> Lucid-HaBC: java.rmi.NotBoundException: /VJdbc
> at sun.rmi.registry.RegistryImpl.lookup(Unknown Source)
> at sun.rmi.registry.RegistryImpl_Skel.dispatch(Unknown Source)
> at sun.rmi.server.UnicastServerRef.oldDispatch(Unknown Source)
> at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
> at sun.rmi.transport.Transport$1.run(Unknown Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Unknown Source)
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
> at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
> at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255)
> at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233)
> at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:359)
> at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
> at java.rmi.Naming.lookup(Naming.java:84)
> at de.simplicit.vjdbc.VirtualDriver.createRmiCommandSink(VirtualDriver.java:182)
> at de.simplicit.vjdbc.VirtualDriver.connect(VirtualDriver.java:110)
> at
> net.sf.farrago.jdbc.client.FarragoUnregisteredVjdbcClientDriver.connect(FarragoUnregisteredVjdbcClientDriver.java:97)
> at net.sourceforge.squirrel_sql.fw.sql.SQLDriverManager.getConnection(SQLDriverManager.java:133)
> at
> net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.execute(OpenConnectionCommand.java:97)
> at
> net.sourceforge.squirrel_sql.client.mainframe.action.ConnectToAliasCommand$SheetHandler.run(ConnectToAliasCommand.java:283)
> at net.sourceforge.squirrel_sql.fw.util.TaskExecuter.run(TaskExecuter.java:82)
> at java.lang.Thread.run(Thread.java:619)
>
>
>


