Last updated March 1st,
1999
Copyright © 1998-1999 Thomas Althammer.
All rights reserved.
No part of this document may be reproduced, distributed, or altered without
my permission.
Where can I find information about Centura's
database products and the Year 2000 problem?
The newest releases are all Year 2000 compliant. Check
www.centurasoft.com/support/tech_info/bulletins/cli2000.htm
for further information.
Even though SQLBase is Y2K compliant, your client-side applications that access
SQLBase may not be. Use a utility available for download at
ftp.centurasoft.com/products/utilities/y2ksqlb.zip to analyze
your SQLBase databases to help with your Y2K efforts. Please read the
ReadMe.txt and License.txt files after extracting them from the .ZIP file.
The topic "Behaviour of the @YEAR( )
function" might be of interest as well.
What does "QBE" stand
for?
In database management systems, query by example (QBE) refers to a
method of forming queries in which the database program displays a blank record
with a space for each field. You can then enter conditions for each field that
you want to be included in the query. For example, if you wanted to find all
records where the AGE field is equal to 42, you would enter 42 in the AGE field
and execute the query.
This lets users easily search for detail information in the same dialogs used
for entering data.
When I attempt to run the SETUP.EXE of a Centura
product I get "Installation Aborted", "Internal Error". How can
I avoid this?
The Wise installer makes extensive use of the TEMP directory during any
installation. Make sure that:
- The directory pointed to by the TEMP (or TMP) environment variable
exists.
- The directory is writable by the current user.
Sadly, there's no way to code around this in the installer, since the error
occurs before the first line of code is executed.
I was running SQLBase 6.0.1 with a PTF
and upgraded to the 6.1.0 production release, but a bug that was fixed in the
PTF is back! Why?
You have to understand the release cycle. While version 6.1 is being
worked on, PTFs are being created for 6.0.1 for bugs found in the code. These
fixes are also incorporated into the 6.1 code. At some point in time, the 6.1
code is frozen and goes to production. However, PTFs are still created for the
6.0.1 version which won't make it into the 6.1 production release. These fixes
are then rolled into the first (or second) PTF release for 6.1. So, if
something was fixed in PTF7 of 6.0.1 and the code for 6.1 was frozen at PTF5,
there would appear to be a regression if you went from PTF7 to 6.1. That's not
really the case, because it was never in the code for 6.1 to begin with.
How many users/processes are supported by
an unlimited SQLBase server?
There is a limit of 99 cursors per process and 800 processes per
server. People have seen 115 concurrent users with 245 processes and 625
cursors connected to one server. The performance greatly depends upon the
application and the database schema in these areas.
Is it possible to change the user name
appearing on the SQLBase-Server screen programmatically?
It can be done with the constant SQLPCLN in conjunction with SQLSET
[SqlSetParameterAll( )]. This is useful when only one SQL.INI is used for a
group of workstations in the network.
Does SQLBase support dual
processors?
That depends on what is meant by "support" of dual processors.
SQLBase cannot run with threads and/or processes doled out to multiple
processors. It will, however, run on multi-processor machines using whatever
processor the OS assigns it to; SQLBase 6.1.2 and later run quite happily in
this configuration.
Let's take a moment and define a few conditions. Since SQLBase (and a
great many other SMP compatible applications) cannot be spread across
processors, any "performance improvements" from having multiple
processors comes from the OS doing load balancing. For SQLBase this means that
the RDBMS can run on one processor while the OS uses a different one for, say,
disk I/O. Performance will be improved since the RDBMS will not have to
wait for the disks (and vice versa) while doing memory intensive stuff.
Also, if there are other applications running in the same box, they can be
load-balanced onto
other processor(s).
So what this boils down to is that if SQLBase is the ONLY application/service
on the machine, the performance improvement from adding a processor will be
less than if it is one of multiple apps.
SQLBase 7.0 is not subdividable across processors.
How do I hide the icon/display of a
multi-user SQLBase server?
This can be accomplished by adding the keyword "SILENTMODE=1"
to any server section of the SQL.INI.
Is there a specific order in which the
COMDLL entries should be listed?
No, in general there is no prescribed order. If you are using the wildcard
"*" with SQLWSOCK or SQLWS32, make sure that this entry is the last
one. Otherwise, SQLBase might ignore subsequent entries.
When an application attempts to connect to a database, it goes down the list of
COMDLL entries, starting from the top, until it is successful in establishing a
connection to the named database. For example, when the Oracle router COMDLL is
listed first, it would load the Oracle router and the Oracle router would see
if there is a REMOTEDBNAME entry for the specified database, and if so, attempt
to connect to it. Otherwise it moves down the list to the next COMDLL
entry.
What are deadlocks and how do I avoid
them?
Check out
http://www.centurasoft.com/support/tech_info/knowledge_base/sbsinfo/timout.pdf
Using VB5, we upgraded to 32-bit. Now our
VB5 programs do not work correctly with SQLWNTM.DLL anymore. How can I fix
that?
You need to call sqlssb() using a different technique. The file VB5SQL.ZIP is an
example of how to do that. It is available in the
Downloads section.
Is it possible to assign different
priorities to database processes?
No, it's not possible. However, as an alternative you might be able to get
around this by creating two databases, one for your background/reporting and
the other for daily business. Just take a copy of the database each night
(restore a snapshot as a different named database). If you have data that gets
updated by the background process, that could be easily replicated into the
main database on a regular basis using SQLBase Exchange.
Is it possible to connect to a database
when the database is on a CD-ROM?
Yes, it is. Just use the READONLY=1 setting in the SQL.INI (all databases on
such a server are readonly in this scenario).
My database crashed and I
don't have a backup with any log files. How can I connect to the
database?
Try this procedure as a last resort.
Let's consider that the database is called MIKE.DBS. Move the MIKE.BKP file to
the database area and rename it to MIKE.DBS. Make sure there are no log files
there.
1. Using SQLTalk, do the following:
A. ROLLFORWARD MIKE TO BACKUP;
<<error message returned. Disregard. >>
B. ROLLFORWARD MIKE TO BACKUP; <<Yes, do it
again!>>
<<Log file x.LOG not found>> message
returned.
C. ROLLFORWARD MIKE END;
<<Rollforward completed>> message returned.
D. CONNECT MIKE 1 username/password;
<<Connected to MIKE>> message returned.
2. Do a check database, etc. to find out the status of your database.
Should I use partitioning to speed
up disk access? What about striping?
People have found out after extensive performance testing
that they got the best performance by using disk striping *without*
partitioning the database and letting the OS decide where to place the db and
logs. This also solves the DBA's concerns about complexity of backup and
restore (MAIN.DBS, etc.) and having to manage the disk space. Since an
unpartitioned database and logs grow dynamically you don't have to be concerned
about running out of disk space in the defined partitions.
It is generally not recommended to use partitioning unless the database is
going to be bigger than 2 GByte. It's just not worth the extra hassle.
I have seen that it helps
performance to set the logfilesize parameter. How do I set this?
The logfilesize parameter is set individually on each database using
SQLTalk. Format is 'SET LOGFILESIZE xxx;' where xxx is the logfile
size in kilobytes, ie. 1000 would give a logfile of 1 MByte. Once set, it will
stay this size until an unload/load is done.
The size of the logfiles is not dependent on the size of the database, but on
the amount of log activity that you have. If you're creating 50-60 log files a
day and your log files are 1 MByte, you might look to setting the logfilesize
to 10 MByte. You must also think about how often a backup is done. When you do
a backup, the log file is 'rolled over' to a new log, regardless of how much of
the current log is used.
Other things to look at to increase performance are the setting for logfile
preallocation, which builds the log file at the full size on creation, and the
extent size for the database, which indicates how much the database will be
extended when it runs out of room. This value is typically set woefully low.
With a large database with lots of growth, you might want it to expand by 10-20
MByte each time rather than by 100 kByte.
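Putting the settings above together, a SQLTalk session for a busy database
might look like this (the database name and the 10-MByte size are examples;
SET LOGFILESIZE takes a value in kilobytes, as described above, and the exact
spelling of version-specific keywords may vary):

```sql
CONNECT MIKE 1 sysadm/password;
SET LOGFILESIZE 10000;    -- 10 MByte log files (value is in kilobytes)
SET LOGFILEPREALLOC ON;   -- build each log file at full size on creation
```

Once set, the size will stay in effect until an unload/load is done, as noted
above.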
I have heaps of log files. Why
doesnt SQLBase get rid of them?
This snippet is from the BKPTIP.PDF file in the SQLBase library:
----------------------------------------------------------------------------
What are log files, why do I need them and why won't they go away?!?
SQLBase uses transaction log files to hold before and after images of changes
to the database as well as periodic checkpoints for transaction control. They
are used for crash recovery (automatically done after a power failure, server
crash, etc.), rollback (when changes made to the database are not required or a
failure causes SQLBase to roll back a transaction) and data recovery (restores
of databases).
Note well that a database consists of the .DBS file and these log files. If the
log files are deleted, the database is made useless! By default, SQLBase
automatically deletes log files whenever they are backed up or when they are
not needed for crash recovery or rollback. This behaviour may be changed by
setting LOGBACKUP ON for a database using SQLTalk. This will specify that the
log files are to be kept until they are backed up by a 'backup logs' command.
This is the only way that these logs can be deleted! This option must be turned
on if you are doing a 'backup database' or 'backup logs'. Conversely, there is
no need to set this on unless you intend to use 'backup logs' and incorporate a
'rollforward' capability into your recovery strategy, ie. not if you will only
do snapshots.
You can specify the maximum size of the log files to be created for each
database by setting the LOGFILESIZE using SQLTalk (default is 1 megabyte). Logs
are created at a minimum size and grow to the limit. They can be pre-built at
the maximum size specified by setting the LOGFILEPREALLOC on for the database
using SQLTalk (default is off). Some ramifications are that if you create a lot
of log files, you can set the maximum size of the logs to be large (say 5 MB)
and pre-allocate the file size. This will result in less I/O required to build
new files and expand the existing file.
Log files will only be deleted, if eligible for deletion, when a 'release
log' or 'backup logs' is done and they are not 'pinned'. A log file is pinned:
1. By a current transaction logged to the log file. Can only be unpinned by a
commit or rollback by that transaction.
2. If a last-but-one checkpoint was done using the log file or a previous log
file. For example, if a checkpoint was done in 6.log then that log and all
subsequent logs will be pinned. This can be unpinned by doing a 'release
log'.
3. If LOGBACKUP is on and the log file has not been backed up. Only a 'backup
logs' will unpin this. Doing a 'backup snapshot' will have no effect on the
pinning of this log file.
----------------------------------------------------------------------------
If you're doing a backup snapshot, the 'release log' will happen automatically
before the log files are backed up. If you are getting excessive log files in
the log area, that means that a transaction has the earlier log files 'pinned'
or you have logbackup set on.
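As a sketch, the LOGBACKUP cycle described above might look like this in a
SQLTalk session (the database name and backup path are examples; check the
exact BACKUP LOGS syntax for your SQLBase version):

```sql
CONNECT MIKE 1 sysadm/password;
SET LOGBACKUP ON;          -- keep log files until they are backed up
RELEASE LOG;               -- unpin logs held by the last checkpoint
BACKUP LOGS FROM MIKE TO C:\BACKUPS\LOGS ON SERVER;  -- back up and delete eligible logs
```

With LOGBACKUP ON, only the 'backup logs' step shown here will ever free the
accumulated log files.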
I don't understand the
difference between doing an ON SERVER and an ON CLIENT
backup. Can't I just specify a mapped drive to the network and
do an ON CLIENT backup?
When doing an ON SERVER backup, the data is
copied directly from the database area to the backup area on the server. When
you do an ON CLIENT backup, all data is passed through the
client before going to whatever destination drive you specify. If you're
using a Novell server, a Novell path is used and not a mapped drive,
ie. server1:sys\backups rather than F:\backups. If using an
NT server, you can specify a drive letter, but this drive letter is from the
server's point of view, regardless of what drive letters the client
physically has or any drive letters mapped on the client for the server.
If you want a better understanding of the whole backup/recovery process,
download BKPTIP.ZIP.
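For example, an ON SERVER snapshot on an NT server might be issued like this
(database name and path are illustrative; remember the drive letter is from
the server's point of view, as explained above):

```sql
BACKUP SNAPSHOT FROM MIKE TO D:\BACKUPS ON SERVER;
```

On a Novell server, the destination would instead be a Novell path such as
server1:sys\backups rather than a drive letter.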
I was trying to unload a database
and I got a read or write failure (#2104) error.
This error typically happens if you have run out of disk space or you
don't have write capability on the directory where the unload file is being
put. Did this error happen immediately or after unloading for a while? If
immediately, check your access. If after a while, check your disk space.
Make sure you check the disk space before you bring the server back
up. Specifically check in the temporary file area specified in the
server's SQL.INI file ('TEMPDIR=').
I would like to restore a snapshot
made from one database to another database with a different name. How do I do
this?
Simply go to the area where the backup exists and rename the .BKP file
to be the new name you want the database to have. Then do a 'RESTORE
SNAPSHOT FROM backuppath TO newdbname;'. Use the 'on server'
option if applicable. Don't forget to rename the .BKP file back to the original
name.
Is there a command to unload just
the schema of a database, without the data? If so, can I do it for just a table
versus the whole database?
Using SQLTalk, type 'UNLOAD SCHEMA filename;'. The entire
schema for the whole database will be unloaded. You can't do this for just a
single table. However, modifying the script file created is really easy.
I'm trying to bring up my
SQLBase server but am getting a 441 error.
This typically occurs when bringing up the server because the server's SQL.INI
file contains a DBNAME= statement for a database that no longer exists. This
could happen because someone deleted the file manually rather than doing a
'DROP DATABASE', or because a file error occurred during a 'DROP DATABASE'
and the SQL.INI file never got updated.
The solution is to compare all DBNAME statements in the SQL.INI file with what
really exists on the server and remove the offending line. The server should
now come up correctly.
Should I use DBAgent to backup my
databases live?
Here's an official statement from Technical Support:
"Centura Technical Support strongly recommends against using DBAgent 1.0
to back up databases directly to tape. There have been several cases where
there were severe problems with this process resulting in corrupt databases. If
you want to use the ArcServe product, it is recommended that a snapshot of each
database is done to a server volume, then the volume is backed up using the
normal backup facilities of ArcServe. No tests have been done with the
newly-released DBAgent 1.5 (for SQLBase 5.x) or DBAgent 2.0 (for SQLBase
6.x)."
I did an unload of a TeamWindows
database and the unload file was heaps bigger than the database itself. I
thought the unload file was supposed to be smaller!
When the database is unloaded, the modules that are compressed are uncompressed
into the script file. Depending on the types of files you're storing (programs,
documentation, etc.) you could expect that the unload script file will be
at least twice the size of the actual database. Plan the unload disk
space accordingly.
I tried to back up my databases,
but ArcServe says the files are open.
The first connection to a SQLBase database flags the file as open. It
will not be closed until the SQLBase engine is shut down or the database is
deinstalled. Centura provides methods of backing up your database while users
are accessing it. They are all specific to the type of recovery you
may need from your backups, ie. do you need to restore your database from last
night's backup (possibly losing all of today's work) or do you need to restore
to ten minutes ago. There are strategies for both these requirements. All of
these options require additional disk space to copy the database and log files;
the backup software (ie. ArcServe) then backs up the copy to tape.
To get an indepth (but hopefully understandable) description of these backup
options, download BKPTIP.ZIP.
My database crashed with recovery
off. What do I do now?
Try this procedure as a last resort:
Let's consider that the database is called MIKE.DBS.
1. Using SQLTalk, do the following:
A. ROLLFORWARD MIKE TO BACKUP;
<<error message returned. Disregard. >>
B. ROLLFORWARD MIKE TO BACKUP; <<Yes, do it
again!>>
<<Log file x.LOG not found>> message
returned.
C. ROLLFORWARD MIKE END;
<<Rollforward completed>> message returned.
D. CONNECT MIKE 1 username/password;
<<Connected to MIKE>> message returned.
2. Do a check database, etc. to find out the status of your database. Keep in
mind that if recovery was off when the crash occurred that all transactions
will have been committed, ie. if you were deleting 100,000 rows and you had a
crash after deleting 50,000 of them, those 50,000 are gone when the database
comes back up.
Is there a simple test that can
determine if performance gains will result from reorganizing my database?
Yes, the best way to get a feel for database fragmentation is to query the
system catalog for tables that have extent pages. Any table with extent pages
has the potential to increase the I/O requirements of your server. The
following query will tell you which tables, if any, have extent pages:
SELECT name, extentpagecount FROM sysadm.systables WHERE
extentpagecount>0 AND system='N' ORDER BY 2 DESC;
I have a corrupted database, what
should I do?
There is a very detailed description about crashed databases available at
http://www.centurasoft.com/support/tech_info/knowledge_base/sbsinfo/sbsrecov.html
written by Centura's Technical Support which explains various techniques about
database recovery.
I keep getting the message
"log space critically short", however, there is definitely enough
space available. How can I avoid this?
There is a bug in certain versions of SQLBase that will return the wrong amount
of space available. Thus, you can add the following (undocumented) setting to
the server section of your SQL.INI file:
DISABLELOGSPACECHECK=1
This tells SQLBase to ignore checking disk space on the logging drive. The
effect is that if you run out of log disk space SQLBase will immediately shut
down - with all open databases requiring recovery.
This is usually a problem since recovery requires > 2 times the disk space
that the current log files occupy - and you just ran out of log disk
space. You'd have to find a lot more disk space elsewhere and move
your log files to that space before you could attempt recovery.
With log drive space checking, SQLBase monitors to see if the space required to
recover is greater than 3 times the space occupied by the current active
transactions. If space becomes too low to guarantee recovery if there
were a catastrophic failure (oops, just tripped over the plug...) then SQLBase
rolls back the earliest outstanding transaction, forcing a release of the
earliest log file(s), thus freeing up log disk space.
This is normally ideal behaviour because (a) you'd be daft to run any RDBMS
without at least as much log space free as the size of the database, and (b)
log space should only become critical if an application is doing something
stupid - like updating most of the database without ever committing.
SQLBase's basic view is that it must protect the committed data at all costs.
Shutting down with recovery required because of a poorly designed
application is normally considered bad form - it's better to remove the
offender and keep the database up and alive. However, in situations like
loading a large database from scratch, you would normally ignore such
constraints and turn recovery off to improve load performance - and that
automatically turns off log disk space checking. If disk space runs out,
you've lost nothing other than time, because you've still got the source load
file(s).
Why did the size of my DBS file
drop so drastically when I ran a reorganize or did an unload/load?
SQLBase will never reclaim disk space. It will only continue to grow. If the
database has had lots of deletes and inserts, it will grow to be large and can
contain lots of "white" or empty space. By reorganizing the database,
you are creating a fresh database from scratch. Therefore, you will not have
the excess "white" space in the file and it will be smaller.
Why should I reorganize my SQLBase
database?
To reclaim disk space. A reorganize does an unload of your database. An unload
is an ASCII SQL script of the database. If corruption occurs, sometimes
creating the database "fresh" can remove the corruption.
I do a backup snapshot every
Sunday. On Friday I had a total corruption, and my database is useless. By
using my Backup and my Logs how much can I recover?
If you have LOG BACKUP OFF, you can recover no more than your BACKUP SNAPSHOT
from Sunday. If you have LOG BACKUP ON then you can restore your backup
snapshot and all of your backed up logs. Then you can do a rollforward and you
will roll through all the logs as long as there are contiguous log numbers.
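With LOG BACKUP ON, the recovery described above could be sketched as the
following SQLTalk session (database name and paths are examples, and the
RESTORE syntax may differ between SQLBase versions, so treat this as an
outline rather than exact commands):

```sql
-- Restore Sunday's snapshot under the original database name
RESTORE SNAPSHOT FROM D:\BACKUPS TO MIKE ON SERVER;
-- Bring back the backed-up log files so they can be applied
RESTORE LOGS FROM D:\BACKUPS\LOGS TO MIKE ON SERVER;
-- Roll forward through the contiguous logs, then finish recovery
ROLLFORWARD MIKE END;
```

The rollforward stops at the last contiguous log number, so any gap in the
backed-up logs limits how far you can recover.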
Can I create a database without
using the CREATE DATABASE command?
Yes. Create a subdirectory under the directory specified in the 'DBDIR='
statement of the SQL.INI with the same name as the new database. Now copy the
file START.DBS to this directory and rename it to your new database name with a
.DBS extension. Edit the server's SQL.INI file to include a 'database='
statement with the new database name.
Is it possible to specify more than
one DBDIR statement in SQL.INI?
Yes, however keep the following things in mind:
- The maximum length of a dbdir statement is 260 bytes.
- You can specify a 'SET DBDIR=' in SQLTalk to modify the DBDIR setting in the
SQL.INI file.
- When a 'CREATE DATABASE' is done, the new database will be created in the
first DBDIR directory specified in the SQL.INI file.
- You can start up the server with command line entries to override the SQL.INI
file settings with spaces separating the entries, ie. DBNTSRV
DBDIR=D:\CENTURA;D:\SQLBASE LOGDIR=D:\LOGS
Is there an easy way to get rid of all
tables created by a user?
You can write an SQL statement that will generate the statements to do
it. If you have a user BOB that you want to get rid of, from SQLTalk connect to
the database and type:
select 'drop table BOB.' || name || ';' from systables where
creator='BOB';
Move these statements (one for each table Bob owns) into the input window
and then execute them.
How can I determine the size of a table in a
database?
You can get the approximate size of a table within the database with
the following query:
SELECT name, pagecount FROM systables WHERE name NOT LIKE 'SYS%';
You will get a result something like this:
NAME                          PAGECOUNT
=======================================
CITYCODE                            980
Under pagecount, you will get a number representing the number of pages
required to hold the current table data. Each page is 1024 bytes. 980 pages is
roughly 1 million bytes taken up within the database. Make sure you execute an
UPDATE STATISTICS ON DATABASE before running this query!
I've got a corrupt table on my database.
Check database doesn't tell me which table has a problem.
Try doing an unload table by table until the unload dies. That will
isolate the table in error. See if you can access the data forward using an
index (ie. datetime), copying the data to a copy of the table, then when it
fails access the data backwards using the same index until it fails. Your copy
of the table will then have all data except the page where the error occurred.
Drop the original table and rename the copy table to the original name. For
more information, see a file RECOVERY.ZIP in the SQLBase library of Centura's
CompuServe forum.
What is the column SNUM in the table
SYSTABLES used for?
SNUM is a simple sequential serial number assigned to tables as they are
created. The table serial number is used to construct ROWIDs for the rows of
the table. There's not much more to it.
How can I quickly return the number of
rows in a table?
In SQLTalk, use the keyword ROWCOUNT. You can also call
SQLGNR from the C/API. Keep in mind that this function only works with SQLBase
and therefore breaks portability to other database platforms. However, other
databases might offer similar functions that return the same information.
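If portability matters, the standard (though slower, since it may scan the
table) alternative is a plain count; the table name here is just an example:

```sql
SELECT COUNT(*) FROM PRODUCT;
```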
When I try to access a table that I know
exists, I get an error stating that the table hasn't been created.
What's going on?
When an object is created, the qualifier is set to the userid that
created the object. If you are SYSADM, you can use all objects created by
everyone; you just have to specify the qualifier.objectname. This allows
multiple objects with the same name.
If, however, you are not SYSADM, you can only access your own objects unless a
grant has been done on the object. For example, the user BOB, who created the
table PRODUCT could state
GRANT ALL ON PRODUCT TO BRENT;
and you would be able to see the table if you specify all your accesses to
PRODUCT as BOB.PRODUCT. A more open form of the GRANT statement allows you to
grant the authorities to PUBLIC, allowing everyone who can connect to the
database access to that table, again with specifying the qualifier name of
PUBLIC.
If you don't want people to have to put a qualifier in front of the table
name to access the table, do a
CREATE PUBLIC SYNONYM tablename FOR qualifier.tablename;
and everyone will be able to see it without specifying the qualifier.
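Putting the pieces together, BOB could make his PRODUCT table usable by
everyone without a qualifier like this (a sketch built from the statements
above; the grant level you choose depends on what access you want to give):

```sql
GRANT SELECT ON PRODUCT TO PUBLIC;              -- let every connected user read it
CREATE PUBLIC SYNONYM PRODUCT FOR BOB.PRODUCT;  -- hide the BOB. qualifier
SELECT * FROM PRODUCT;                          -- now works without a qualifier
```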
I get unexpected results when I'm doing
comparisons with data that has null values. How do I do comparisons and get an
accurate result?
When SQL compares the values of two expressions, three
results can occur:
- If the comparison is true, the test yields a true result.
- If the comparison is false, the test yields a false result.
- If either of the two expressions produces a null value, the test
yields a null result.
You can't just test for a null value using a simple comparison, such as
'SELECT name FROM salesreps WHERE rep_office=NULL;' because if it is
null, the result will not be true (you're testing for =), it
will be null. Therefore, SQL provides a special null value test (IS NULL
or IS NOT NULL), ie. 'SELECT name FROM salesreps WHERE
rep_office IS NULL;'
You can combine this is null test with your other comparisons to
screen out records where the nulls occur.
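For example, to pick up both the reps in a given office and those whose office
is not set, combine the tests (the office number 11 is illustrative):

```sql
SELECT name FROM salesreps
 WHERE rep_office = 11
    OR rep_office IS NULL;
```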
How do I terminate a query before
completion?
Do you have SQLConsole? This will allow you to see what queries are taking
place, their relative cost to complete and the execution plan. It will also
allow you to abort a query, but you have to have set clientcheck=1 in your
server sql.ini file to abort a process before the result set is built.
If you don't have SQLConsole, you might download ABTUSR.ZIP. This program uses C/API calls to
query a server and abort all users connected to a specified database.
This could easily be modified to show you all the connections to a particular
database and allow you to specify which user process you want to zap.
I'm doing a select that uses a bind
variable with a like clause. It runs dreadfully slow because
it's not using an index that I think it should. What can I do to speed it
up?
The problem is with using 'like' with a bind variable. The optimizer
doesn't look too kindly on it. Try the same command in your program with a
literal such as 'BLA%' and it should use the index. To get around this
behaviour, you can concatenate the information you're looking for into the SQL
statement programmatically, so you don't use the bind variable.
Let's suppose we have an index on column A. The query: SELECT * FROM X
WHERE A LIKE :1 will not use the index because the LIKE clause is unknown,
that is, the optimizer does not know if the LIKE is left-anchored. But the
query: SELECT * FROM X WHERE A LIKE 'ISH%' would use the index because
the optimizer knows that the query is left-anchored.
Therefore, a user may find it better to compile the second query above every
time than to store and use the first query. But a clever trick here would be
to use the following query instead, if you know the LIKE clause will always be
left-anchored even though the optimizer doesn't know this:
SELECT * FROM X WHERE A LIKE :1 AND A >=:2
The two bind variables could point to the same program variable. And the optimizer
still would use the index when it is stored. The optimizer does not know
anything about the bind variables or the values in the bind variables when
determining the execution plan. Only left anchored LIKE clauses will use
available indexes. A left anchored LIKE clause is a LIKE clause where the value
is something like 'ISH%' or 'Mc%'. The LIKE clause is NOT left anchored if the
value is something like '%son' or just '%'. Therefore, if you have a WHERE clause
like "WHERE columnname LIKE :1" an index will not be used because the
optimizer does not know whether this bind variable will be left anchored or
not.
Again, a workaround for this, if you know that the system is designed to
always essentially force a left anchored like, is to code the following: SELECT
columnnames FROM tablename WHERE columnname LIKE :1 AND columnname >= 'A'
The trick here is that columnname >= 'A' forces the use of the index that
will also be used for the LIKE clause.
When using the @YEAR( ) function in a Select
statement, I get mysterious results. Is this a bug?
The following results will appear when the @YEAR function is used:
DATE          @YEAR(DATE)
=========================
31-DEC-1999            99
01-JAN-2000           100
01-JAN-2001           101
This is correct - and documented - behaviour and thus not related to Year
2000 compliance. To retrieve 4-digit values, use the function @YEARNO( )
instead.
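A quick way to see the difference side by side; the table and column names are
examples only:

```sql
-- @YEAR returns years counted from 1900; @YEARNO returns the 4-digit year
SELECT order_date, @YEAR(order_date), @YEARNO(order_date) FROM orders;
```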
I'm coming up with bizarre results when
counting rows in a table. What is this caused by?
I do a 'SELECT COUNT(*) FROM Table' and come up with 100 rows. When I
then do a 'SELECT COUNT(*) FROM Table WHERE Amount = 0' I get a result
of 10 rows, but when I do a 'SELECT COUNT(*) FROM Table WHERE Amount != 0'
I get a result of 85 rows! You've probably got 5 rows that have a null
value in the amount column. When SQL does a comparison, three results can
occur:
- If the comparison is true, the test yields a true result.
- If the comparison is false, the test yields a false result.
- If the column is a null value, the test yields a null result.
You can't just test for a null value using a simple comparison, such as:
select name from salesreps where rep_office=NULL;
Therefore, SQL provides a special null value test (IS NULL or IS NOT
NULL), ie.
select name from salesreps where rep_office IS NULL;
You can combine this "is null" test with your other comparisons
to screen out records where the nulls occur.
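To verify where the "missing" rows went, count the null values directly; with
the figures above (100 total, 10 equal to zero, 85 not equal to zero) this
should account for the remaining 5 rows:

```sql
SELECT COUNT(*) FROM Table WHERE Amount IS NULL;
```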
How do I calculate the week number of a given
date?
You can use the following SQL code with SQLBase:
SELECT ( @WEEKBEG( datecol ) - @WEEKBEG( @YEARBEG( @WEEKBEG( datecol ) + 3
) + 3 ) ) / 7 + 1 AS weeknumber
Instead of using @YEARBEG( @WEEKBEG( datecol ) + 3 ) + 3 you can
use @DATE( @YEARNO( @WEEKBEG( datecol ) + 3 ), 1, 4 ).
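The expression above effectively computes the ISO-8601 week number: @WEEKBEG( datecol ) + 3 is the Thursday of the week, and the Monday of the week containing January 4th starts week 1. Assuming @WEEKBEG( ) returns the Monday of a date's week and @YEARBEG( ) returns January 1st of its year, the same arithmetic can be transcribed into Python and checked against the standard ISO week number:

```python
from datetime import date, timedelta

def weekbeg(d):
    # Assumed equivalent of SQLBase's @WEEKBEG(): the Monday of d's week.
    return d - timedelta(days=d.weekday())

def yearbeg(d):
    # Assumed equivalent of @YEARBEG(): January 1st of d's year.
    return date(d.year, 1, 1)

def week_number(d):
    # Direct transcription of the SQL expression above.
    three = timedelta(days=3)
    return (weekbeg(d) - weekbeg(yearbeg(weekbeg(d) + three) + three)).days // 7 + 1
```

For example, week_number(date(2000, 1, 1)) yields 52, matching the ISO rule that January 1st, 2000 still belongs to the last week of 1999.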
When I do deletes of a table where I have
foreign keys, the performance is horrible! What can I do to speed things
up?
The solution is that foreign keys need to be established
after an index exists on the joining column in each child table. For
each table with the foreign key:
1) Drop the foreign key.
2) Create an index on the column used for the key.
3) Recreate the foreign key.
I tried to do an insert into a table and got
the message "table in an incomplete state". What causes this and what
do I do now?
It looks like the index that was specified when you set a primary key
for the file has been deleted. Drop the primary key for the file, create the
index, and re-specify the primary key for the file. That should take care of
your problem.
Is it possible to update a table with values
from a select statement?
You can't do an update with a subselect in version 6.x.x; this capability
was first introduced with SQLBase 7.0. However, the workaround is to use
SQL to create SQL. There's an excellent sample for SQLWindows
demonstrating this technique available for download at
www.centurasoft.com/support/tech_info/knowledge_base/sbsinfo/t1tot2.zip.
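The downloadable sample itself is not reproduced here, but the "SQL to create SQL" idea can be sketched in a few lines of Python (the table and column names below are made up for illustration): fetch the new values with a SELECT, then generate one UPDATE statement per fetched row and execute the generated statements.

```python
# Hypothetical sketch of the "SQL to create SQL" workaround.
def make_updates(rows):
    # Build one UPDATE statement per (key, value) pair fetched earlier.
    stmts = []
    for key, price in rows:
        stmts.append(
            "UPDATE invoice SET price = %s WHERE invoice_no = %s;" % (price, key)
        )
    return stmts

# Rows as they might come back from e.g.
#   SELECT invoice_no, new_price FROM price_change;
fetched = [(1001, 19.99), (1002, 24.50)]
for stmt in make_updates(fetched):
    print(stmt)
```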
Is it possible to group multiple
COMMITs together?
The keyword "groupcommit" is a SQLBase configuration keyword that controls
how many COMMITs are bundled together before writing them to disk. The
ability to specify groupcommit first became available in SQLBase 5.1.4 and
was designed to increase transaction throughput on systems with many active
client applications.
The first characteristic to keep in mind with this keyword is that it tells
SQLBase to wait until X COMMITs have occurred before writing them to disk. This
sounds interesting at first; however, experienced DBAs and developers are
probably a bit wary of a configuration keyword that causes SQLBase to wait,
especially for something as critical as COMMITs. In fact this concern forms the
basis for determining the best groupcommit
value for your system.
Groupcommit specifies the maximum number of COMMITs that SQLBase groups
together before it physically flushes log buffers to disk. By default,
SQLBase waits for 1 system tick for the number of COMMITs to equal the
groupcommit setting. This waiting period is controlled by another
configuration keyword, groupcommitdelay, which controls the maximum number
of system ticks (this is determined by hardware; a tick usually takes about
1/30 of a second) that SQLBase waits before it performs the log flushing.
If groupcommit and groupcommitdelay are set improperly, SQLBase performance
will degrade if both of the following conditions are satisfied:
- The number of active users is less than the groupcommit setting
- The number of inactive users is greater than the number of active users
Inactive user: a user who simply connects to a database and does nothing. In
SQLBase, a transaction is automatically started for this user at the time of
connection, so inactive users hold open transactions.
Active user: a user who connects to a database, performs operations and
issues COMMITs frequently.
I get an error message about
"Application deadlock", what should I do?
This error occurs when two applications are trying to get locks on the same
pages in the database, and each already has a lock on a page the other one
needs. Locks get placed when data is accessed or manipulated, depending on
the isolation level. For example:
User 1 has a lock on Page 45, needs a lock on Page 107.
User 2 has a lock on Page 107, needs a lock on Page 45.
The two applications are deadlocked. If they both wait for the page they need
to become unlocked, the two will hang in an infinite loop. Therefore, SQLBase
picks one, fails the SQL statement with the "deadlock" error,
releases the locks of the failed application and then completes the task of the
second one.
A technique to reduce the occurrences of deadlocks is to have all your
applications perform their insert/update/delete statements in the same
order. In the sample above, both applications would then try to gain a lock
on page 45 before accessing page 107, so the second application is simply
delayed until the first one completes its task.
What does the error message 63731 stand
for?
Such a high error message number is actually invalid according to the
SQLBase documentation. It is caused by a bug: 65536 - 63731 = 1805, the
timeout error. Error numbers above 60000 can be converted with the formula
"( 65536 - [error number] ) * ( -1 )", which yields the negative form of
the native error code.
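The conversion can be sketched as follows (the 60000 threshold is the one given above):

```python
def native_error(code):
    # Error numbers above 60000 are negative 16-bit error codes that were
    # stored in an unsigned short; recover the real number by subtracting
    # the reported value from 65536.
    return 65536 - code if code > 60000 else code

print(native_error(63731))  # 1805 (timeout)
print(native_error(63735))  # 1801 (application deadlock)
```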
I can't find any documentation on error
number 63735. What does it mean?
You ought to know that 63735 is actually error 1801, application deadlock
(transaction rolled back). The value of 63735 is -1801 mistakenly cast into
an unsigned short.
My database engine terminates with error
703. What does this mean?
Error 703 is most probably a hardware problem. Run the "check
database" command from a client and see what it says. The database engine
will not terminate when check database finds an error. If check database
finds an index corruption, drop and recreate the index. If check database
finds an extent page corruption, you have to find the corrupted row: create
a new table with the same structure as the corrupted table, then insert all
non-corrupted rows into the new table, drop the old corrupted table and
rename the newly created table to the name of the dropped one. This way you
at least save your non-corrupted data rows. After you have fixed everything
and made a backup, it's a good idea to do an unload/drop database/create
database/load sequence (load with recovery set to off) or a reorganize
instead. This will completely rebuild your database, so afterwards you can
be sure that it is in a correct state. See also:
http://www.centurasoft.com/support/tech_info/knowledge_base/sbsinfo/sbsrecov.html
I've received a negative error number. How
does it differ from a positive return code?
A negative error means the database server has initiated a rollback. In SQLBase
the extent of the rollback can be limited to the last compiled command or the
entire transaction. A positive error means that the transaction is still active
(i.e. the application can choose to commit or rollback as desired).
Why do I get 00422 Invalid Servername
Length?
Older clients don't work against newer servers; one symptom is the error
00422 Invalid Servername Length. This is a problem because Centura Team
Developer ships with a 6.x client, which won't work with a SQLBase 7.x
database. You need to replace SQLWNTM.DLL and the COM DLLs with the newer
versions.
I get an error message about
"Application deadlock", what should I do?
SQLBase will automatically ROLLBACK the later transaction in case of an
application deadlock; the database takes care of that error itself. You
just have to take care of the local error handling.
I have the error code xxx and don't know
how to get more information about it, what should I do?
Usually the error code from SQL operations refers to a longer description
either obtained from the standard SQL error message box or from a little
utility called "DBERROR" which is installed with your Centura
database products. If the error code is greater than 20000, it is an error
from a database router (Oracle, SQLServer, Sybase, ...), which means you
subtract 20000 to get the native error code of the database platform, e.g.
for 20543 the native Oracle error code would be 543. With this number you
can check the documentation of the database server to see what it means.
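The rule for router error codes can be sketched as:

```python
def router_native_error(code):
    # Codes above 20000 come from a database router; subtracting 20000
    # yields the backend's own error number (e.g. 20543 -> ORA error 543).
    if code > 20000:
        return code - 20000
    return code

print(router_native_error(20543))  # 543
```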
How do I reduce the number of timeouts in
my application?
- The default mode for SQLWindows when it connects to SQLBase is RR (Read
Repeatability). When you connect your cursors, you should change the
isolation level to RL (Release Locks). This alone may reduce your timeouts
dramatically - if you are not doing this already.
- Make sure that all transactions (update/insert/delete) are handled quickly
and a COMMIT (or ROLLBACK) is done at the end of the transaction before ANY
user display or interaction is allowed. After all, modification statements
put an exclusive lock on the data and increase the likelihood of timeouts
until they are committed or rolled back.
- Alter the timeout period after you connect the cursors. The timeout period
should be set to something reasonable, at least 30-40 seconds.
- You may wish to implement RO - Read Only isolation mode for cursors that are
just going to be used for reading the database. Please read about ReadOnly and
see if you would like to use it. There may be a performance hit when using
RO.
- Do NOT use SqlImmediate( ... ) or SqlExists( ... ) in your SQLWindows code!
These commands will lead to unnecessary timeouts! You can easily write your
own SqlImmediate replacement by defining your own cursor, etc., and you will
have MUCH more control over what is happening in your code. I cannot be
emphatic enough about not using these two commands.
- It is possible to increase the PCTFREE parameter in a table definition. For
more information about this issue, consult the online books. This will probably
increase your database size quite significantly, though.
I try to use the constant DATETIME_Null in
stored procedures but it doesn't seem to work. What is going on?
There is a bug regarding this constant in SQLBase version 6. This has
been corrected in SQLBase 7.0.1.
As a workaround, you could use:
Set bOk = bOk AND SqlPrepareAndExecute( hSql, 'SELECT @DATE(NULL, NULL,
NULL) FROM SYSTABLES INTO :dDATETIME_Null' )
Set bOk = bOk AND SqlFetchNext( hSql, nFetch )
I got a duplicate server
error when trying to install SQLBase on Novell.
The NLM has sensed that there is another server with the same name as
specified in the SQL.INI file of the server. The ERROR.SQL file specifies
"Remedy: Verify that the server being installed does not already exist on
the network".
Check the server SQL.INI file where the server name is specified. Change the
server name there to be something unique (i.e. not server1). Make sure that
there are no other servers on the network using the same server name. If you
have another server using SQLBase, the server name on that server might still
be the default in which case one name must change.
I'm getting an error 182 when
trying to do a backup on a Novell server. What does this mean?
Error message 182 indicates that SQLBase cannot create a backup file. Make sure
you have enough disk space for the backup and that the directory exists.
Also make sure that if you're doing a backup 'ON SERVER', a Novell path is
used and not a mapped drive, e.g. server1:sys\backups
rather than F:\backups.
When shutting down SQLBase on Novell, I
get the message "1.1.14 Module did not release 1152 resources". Is
this a known problem?
This is a temporary event. The resources get released back to the system in
a couple of minutes, so this is nothing to worry about.
SQLBase on my NT server runs really
slow. How can I speed up performance?
Is the NT server set up as a file server or as an application server? Go
into Control Panel, select Network, then highlight 'Server' in the list
box. Make sure the default usage for the server is set to 'Application'
rather than 'File'. This will speed things up by about 100%!
Also watch out for screen savers: 3D animation screen savers will push the
processor's utilization to 100%.
Is there a way to set up SQLBase as a
service on NT prior to version 7?
If you have ever worked with SQLBase Desktop for Windows NT, then you will have
noticed that when you LOGOFF from NT Server the SQLBase server process is
terminated. It would be preferable for the SQLBase server to remain available
when no-one is physically logged onto the NT Server terminal.
As part of the Windows NT environment you can set up services. These
services are tasks that NT manages without user intervention; they normally
take the guise of system maintenance / monitoring tasks. NT service tasks
survive the logoff procedure, so if we set up SQLBase as an NT service we
have access to the databases at all times. Setting up SQLBase as an NT
service brings a number of advantages:
- SQLBase survives NT logoff / logon sequences
- SQLBase starts and services requests even when no user is logged on the NT
Server
- SQLBase automatically re-starts in the event of a system reset
It is possible to configure the NT Server environment such that your SQLBase
server is treated just like an NT service.
Contained within the NT Resource Kit (available from Microsoft) are two
utilities:
- INSTSRV.EXE - To install new services onto NT servers
- SRVANY.EXE - The controlling service program that allows you to configure
applications as services. This is supplied with a SRVANY.WRI.
Copy SRVANY.EXE and INSTSRV.EXE to your NT server, then run INSTSRV.EXE to
install SRVANY.EXE as a service, e.g. INSTSRV <A SERVICE NAME>
C:\TOOLS\SRVANY.EXE. Open the Services applet (in the Control Panel), find
and select <A SERVICE NAME> and click on the STARTUP... button. Configure
the service as Automatic and check 'Allow Service to Interact with Desktop'.
Specifying SQLBase Server and its Parameters
Run REGEDT32.EXE. Under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\<A SERVICE NAME> add
a Parameters key. Under that Parameters key add an Application value of type
REG_SZ and specify the parameters for launching SQLBase e.g.
C:\SQLBASE\DBNTSRV.EXE. Under the Parameters key add an AppDirectory value of
type REG_SZ and specify the current directory for SQLBase to use e.g.
C:\SQLBase. If this is missed out SQLBase will load but it will not be able to
find any of its components - MESSAGE.SQL, etc.
Starting and Stopping the Service
You can control both starting and stopping SQLBase via the Control Panel
Services applet. Since the Service has been configured as automatic, it will
automatically start up. Be careful about stopping the service whilst
connections to SQLBase databases exist: it is not a graceful shutdown of
SQLBase, and any existing connections will be cut instantly.
Points to Remember
When you logoff from the NT server, a SQLBase dialog will be displayed with a
Shutdown Warning. If all you are doing is logging off from NT Server then
ignore this message. It will also look like SQLBase has been terminated. It
has not; it will still be available from your client workstations.
For more in-depth details on setting up applications as Services please see the
SRVANY.WRI file from the NT Resource Kit.
I've just changed from Windows 3.1
to Windows 95 and am now using an NT server. What do I need in the SQL.INI
files?
Here's the appropriate snippet from the SQL.INI for the NT Server:
_________________________________________________
[dbntsrv]
servername=sqlperx,sqlws32
password=xxxx
cache=3000
sortcache=4000
asyncio=0
readonly=0
oracleouterjoin=0
logfileprealloc=1
partitions=0
optimizerlevel=2
dbdir=d:\sqlbase
logdir=d:\sqlbase
tempdir=d:\sqlbase\temp
locktimeout=275
DBNAME=BAMS,SQLWS32
DBNAME=MODEL,SQLWS32
[dbntsrv.dll]
comdll=SQLWS32
______________________________________________________
And from the client SQL.INI:
______________________________________________________
[winclient]
clientname=WinUser
connecttimeout=20
[winclient.dll]
comdll=sqlwsock
[winclient.wsock]
serverpath=sqlgyl2,131.2.3.103/prs,prsdb,dis,distr,almevent,delay
serverpath=sqlperx,203.3.21.210/model,bams,demo
[win32client]
clientname=WinUser
[Win32client.dll]
comdll=sqlws32
[win32client.ws32]
serverpath=sqlgyl2,131.2.3.103/prs,prsdb,dis,distr,almevent,delay
serverpath=sqlperx,203.3.21.210/model,bams,demo
____________________________________________________________
An explanation of the client SQL.INI entries (the server is self-explanatory)
and some 'gotchas':
The comdll= statement in the WinClient section, together with the
WinClient.Wsock section, will be used for 16-bit applications (SQLTalk,
SQLWindows applications, etc.). The Win32Client sections will be used for
32-bit applications (SQLNTtlk, etc.). The serverpath= statement must be
used with TCP/IP. There are various options for the serverpath statement
depending on your situation. If you've got more than one server you will be
connecting to, you must specify the names of the databases on each server;
this results in a bit of maintenance overhead when you add a new database.
If you've only got one server, you can use an '*' after the '/' to indicate
all databases. You can 'create', 'drop', 'install database' and 'deinstall
database' without changing the client SQL.INI file, but once a database is
created, you must leave all Centura applications, change the SQL.INI file,
then go back in and attach to the database. This can be a real pain if you
have an SQL.INI file for each client. We've got one SQL.INI on the network
that all people use; it's just a matter of changing that one SQL.INI file
and everyone can see the new database.
The multiple server/multiple section (16-bit, 32-bit) is a maintenance pain as
well. Take a look at the contributed program that will replace whole sections
in an .INI file (multiple sections using multiple replacement files),
INIUPD.ZIP.
How do I install SQLBase as a service on
Windows 95 prior to version 7?
Services in Windows 95 are applications started with the boot process. They
will run before someone logs into the workstations. To add a service, open
REGEDIT.EXE and go to the key
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunServices.
Insert a new string value right below that key and give it a unique name, like
"SQLBaseServer". Set the value of this string to the path and file
name of your database server, for example "C:\Centura\DBNT1SV.EXE".
To execute it minimized, the path statement has to be preceded with "start
/m".
Is there any way to allow the 16- and 32-bit
applications to talk to the same local SQLBase server?
A detailed description is also available at the
Downloads section.
Before SQLBase for Windows 95 became available, the local 16-bit SQLBase
engines (DBWSERVR.EXE and DBWINDOW.EXE) were fairly straightforward and easy to
get up and running. The database engines and the applications were both 16-bit
and they were aware of each other in that the server would launch automatically
when needed. With SQLBase for Windows 95 (DBNT1SV.EXE), there are a few more
considerations to take into account:
- SQLBase for Windows 95 is a 32-bit server, which means it can address more
memory than was possible with the 16-bit engines.
- 16-bit applications can only connect to SQLBase for Windows 95 via the
16-bit WinSock TCP/IP DLLs.
- SQLBase for Windows 95 does not launch automatically upon database or server
connection requests from 16-bit client applications.
Configuring TCP/IP for Windows 95 on a machine that does not have any network
adapters can be tricky and is not altogether straightforward. The steps listed
below are designed to help you install and configure TCP/IP for Windows 95 so
that the SQLBase Server for Windows 95 can be accessed by both 16- and 32-bit
applications.
If you have already installed TCP/IP on your Windows 95 machine and it
is working properly, use the steps in this section to configure SQLBase. The
following steps assume you are familiar with SQLEdit (if you aren't, please
refer to the SQLBase Starter Guide):
- Use SQLEdit to configure the Windows 95 Server ([dbnt1sv]) to support both
Anonymous Pipes (SQLAPIPE.DLL) for 32-bit client applications and WinSock
TCP/IP (SQLWSOCK.DLL) for 16-bit client applications. Additionally, set the
server name to be "LOCAL32".
- Use SQLEdit to configure the Windows 95 client ([win32client]) to connect
with Anonymous Pipes.
- Use SQLEdit to configure the Windows 3.X client ([winclient]) to connect with
TCP/IP (WinSock). Additionally, you need to set the serverpath parameter value
to "LOCAL32,localhost/*".
- Launch SQLBase Server for Windows 95.
- Now you are ready to connect to the server from either the 16- or 32-bit
versions of SQLTalk for Windows.
If you have not installed TCP/IP, use the following steps to configure
TCP/IP for Windows 95:
- From the Network dialog (accessed via the Control Panel), press the Add
button. The Select Network Component Type dialog is displayed.
- Select Network Component Type; select Protocol and press the Add button. The
Select Network Protocol dialog is displayed.
- Select Network Protocol; select Microsoft as the Manufacturer and TCP/IP as
the Network Protocol and press the OK button. The Select Device dialog is
displayed.
- Select Device; If you don't have a NIC in your PC, select Microsoft as the
Manufacturer and Dial-Up Adapter as the Model. Otherwise, select the
Manufacturer and Model for the NIC that is installed and press the OK button.
The Network dialog is displayed again with the currently installed network
components.
- Network. If you are performing this install so that you can communicate with
a local 32-bit SQLBase engine, it is important to note that the only components
that are required are Dial-Up Adapter and TCP/IP. To remove unnecessary network
components, select any of the components you wish to exclude and press the
Remove button.
- Network. Once you are satisfied with the installed network components, press
the OK button. At this point you may be prompted to insert the Windows 95
distribution media. Once the changes have been made, the System Settings Change
dialog will be displayed.
- System Settings Change; Pressing the Yes button will restart your computer
with the new network settings.
- After the computer resets, complete the SQLBase and SQLTalk configuration
steps as listed above.
What is the SQLBASE environment variable
used for?
There is a SQLBASE environment variable which points to the directory in which
SQLBase should look for the files SQL.INI, ERROR.SQL, MESSAGE.SQL, COUNTRY.SQL,
and MAIN.INI. This environment variable was created as a solution to the
problem of users having multiple copies of the same files on a computer.
To specify the \GUPTA directory, add the following line to your
AUTOEXEC.BAT:
Set SQLBASE=C:\GUPTA
If you specify the SQLBASE environment variable, SQLBase looks only in this
directory. Otherwise, the search order for the above files is:
1. Current directory
2. \SQLBASE (of current drive)
3. Root directory (of current drive)
4. Directories specified by the PATH environment variable
5. Directories specified by the DPATH environment variable (for OS/2 2.0
only).
You can set the SQLBASE environment variable on both the client and server.
Are there any problems I might face
when working with Windows 98?
Centura Software Corporation is in the process of certifying its 32-bit
products on Windows 98. Visit
http://www.centurasoft.com/support/tech_info/bulletins/win98warn.html
to obtain further information.
In which order should I load the NLM
modules when using SQLBase on Novell platforms?
NLMs required for SQLBase 7.0.x
TCP/IP (comdll=sqltip for all NetWare versions):
- NetWare 3.1: dsapi tli dfd dll tlidll exfmgrnw dbnservr
- NetWare 4.1.0: dsapi tli dfd dll tlidll exfmgrnw dbnwsrv
- NetWare 4.1.1 (NDS): dsapi tli dfd dll tlidllds exfmgrnw dbnwsrv
- NetWare 4.1.1 (Bindery, easier than NDS): dsapi tli dfd dll tlidll exfmgrnw dbnwsrv
IPX/SPX:
- NetWare 3.1 (comdll=sqltsp): tli spxs dfd dll tlispx exfmgrnw dbnservr
- NetWare 4.1.0 (comdll=sqlspx): dsapi dfd dll spxdll40 exfmgrnw dbnwsrv
- NetWare 4.1.1 (NDS) (comdll=sqltsp): dsapi tli spxs dfd dll tlispxds exfmgrnw dbnwsrv
- NetWare 4.1.1 (Bindery) (comdll=sqltsp): dsapi tli spxs dfd dll tlispx exfmgrnw dbnwsrv
NLMs required for SQLBase 6.1.2
TCP/IP (comdll=sqltip for all NetWare versions):
- NetWare 3.1 before PTF8: tli dfd dll tlidll dbnservr
- NetWare 3.1 PTF8 onwards: tli dfd dll tlidll dbnservr
- NetWare 4.1.0: tli dfd dll tlidll dbnwsrv
- NetWare 4.1.1 before PTF4: tli dfd dll tlidll dbnwsrv
- NetWare 4.1.1 PTF4 onwards: tli dfd dll tlidll dbnwsrv
IPX/SPX:
- NetWare 3.1 before PTF8 (comdll=sqlspx): dfd dll spxdll dbnservr
- NetWare 3.1 PTF8 onwards (comdll=sqlspx): tli spxs dfd dll tlispx dbnservr
- NetWare 4.1.0 (comdll=sqlspx): dfd dll spxdll40 dbnwsrv
- NetWare 4.1.1 before PTF4 (comdll=sqltsp): not certified; apply the latest PTF (or PTF4 if you don't want to lose @DECRYPT)
- NetWare 4.1.1 PTF4 onwards: dfd dll spxs tlispx dbnwsrv
TCP/IP & SPX (comdll=sqltip and comdll=sqltsp), NetWare 4.1.1 PTF4 onwards:
- dfd dll tli spxs tlidll tlispx dbnwsrv
After a Netware crash, the SQLBase
server reports "missing logfiles". What should I do now?
Sometimes the most recent log file was in use by the database during a Netware
file server crash. In this case the Netware server might have deleted the log
file during the restart process which confuses SQLBase. Undeleting the log file
with Netware's FILER.EXE utility can restore this log.
My Win95 clients are getting the error
"Cannot load sqlspxw.dll" and the configuration is the same as the
Win3.1 clients. I have the Novell client installed. How can I fix
that?
See
http://www.centurasoft.com/support/tech_info/knowledge_base/sbsinfo/fileli~1.html.
Whenever you receive a "Cannot load library ..." error, it indicates that
either the listed DLL or one of its dependent DLLs cannot be loaded. Make
sure that you have all the dependent DLLs.
Do I need to upgrade my client files if
I upgrade my database version ( 5.x - 6.x )?
While it is recommended that you do so, the API is backward compatible as
long as you do not try to utilize any of the new features of the 6.x
database.
What TCP/IP port number
does SQLBase use?
By default SQLBase uses 2155 as the port number, but using the
LISTENPORT keyword in the server's SQL.INI protocol section can change this.
Depending upon the version of the server and the version of TCP/IP you are
using, your entry should be something like this:
[<servername>.<tcp/ip protocol>]
listenport=<portnumber>
If you do not use the default portnumber, it must be explicitly
coded in the SERVERPATH keyword in the client's SQL.INI in the client protocol
section. The entry would look something like this:
[winclient.wsock | win32client.ws32]
serverpath=<servername>,<hostname|ip
address>,<portnumber>/<servicename1>[,<servicenamen>]|*
Here is an example using the 32-bit unlimited server and a 32-bit client:
[dbntsrvr.ws32]
listenport=2500
[win32client.ws32]
serverpath=server1,127.0.0.1,2500/island,video,payroll
Which files are copied when installing
the desktop version of SQLBase 7?
There is a paper available at
www.centurasoft.com/support/tech_info/knowledge_base/sbsinfo/SQLBaseFileInfo.html
that gives a list of files that are installed with SQLBase Desktop v7.0.0,
together with descriptions.
I've tried to install SQLConsole for
6.0 and it won't let me create the local database! Why not?
Seems that you have to create the local database for SQLConsole using
just the right steps:
1. Start it up in TOOLS ONLY mode (should already be doing that without any
auto features on).
2. In the 'Options' menu you'll see two options, 'Create SQLConsole database'
and 'Enable automatic features'. The 'Create' option doesn't work well. Select
the 'Enable' option. This will load the local engine (as defined in your client
SQL.INI file, [dbwservr] section), try to connect to the local database and
give you an error stating that it can't connect and asking if you want to
create the local database. Select 'Yes'.
Is it possible to have "dynamic
scripts" for TLKNRTR (the old DOS version of SQLTalk)?
TLKNRTR can be run as a command executor from within DOS batch files. The
commandline parameter INPUT=CommandFile tells TLKNRTR which set of commands to
execute. Sometimes it is desired to have different commands executed depending
on environment variables. The trick is to create the command file from within
the batchfile as it is done in the following example:
@echo off
rem set variables
set BkpDir=\\machine\vol\backupdir
set DB=MyDB
set DBSrv=MyServer
set Auth=sysadm/password
rem create the commandfile
echo set server %DBSrv%; > tmp.cmd
echo connect %DB% 1 %Auth%; >> tmp.cmd
echo backup database from %DB% to %BkpDir% on server; >> tmp.cmd
echo release log; >> tmp.cmd
echo backup logs from %DB% to %BkpDir% on server; >> tmp.cmd
echo exit; >> tmp.cmd
rem execute the commandfile
tlknrtr noconnect bat input=tmp.cmd output=tmp.log
del tmp.cmd
How can I start SqlTalk to run a script
automatically?
Use the following parameters:
SQLTALK BAT INPUT=<scriptfile.wts> OUTPUT=<logfile>
I am trying to start QUEST, but there are
no activities displayed except "New" and "Open". How can I
fix this?
Quest searches for a file "QUEST.INI", which includes definitions for
the activities. You can try to reinstall SQLWindows/Quest to get correct
entries. Instead of that you can check the entry "questdir" in the
file WIN.INI under section [QUEST], which should point to a valid file
"quest.ini" that defines the activities supported.
When trying to insert an empty
string, this will get replaced with a space. Can this be avoided?
Centura's database routers behave as follows:
Client entry | Oracle | Informix | MS SQLServer | Sybase | IBM DB2 |
---|---|---|---|---|---|
empty string ("") | NULL | space | space | space (constant), NULL (bind variable) | space |
NULL | NULL | NULL | NULL | NULL | NULL |
space | space | space | space | space | space |
STRING_Null | NULL | NULL | NULL | NULL | NULL |
To avoid the problem of inserted spaces, add the setting
"SetZeroLengthStringsToNull=1" to SQL.INI under [WinClient] and
[Win32Client]; this fixes both the writing and the reading of empty
strings.
I'm seeing a significant decrease in
speed against my backend over time when using front-end result sets. How
can this be avoided?
The routers for non-SQLBase backends create temporary files with filenames
like frsXXX (front-end result set) to simulate some behaviour otherwise
unique to SQLBase. These files should be located in your TEMP folder. When
your computer crashes with result sets open, or you end your database
session uncleanly, these files are never deleted automatically and can slow
down the operation of your application (browsing through thousands of
frsXXX files takes time!). To regain the speed, shut down all database
applications on the client and delete all frs* files. Another way to clean
up these files automatically is to put the line
DEL %TEMP%\frs*
in your client startup script (AUTOEXEC.BAT).
You could also reduce the number of FRS files by turning result sets off
globally with "Set SqlResultSet=FALSE" and enabling them only for those Sql
handles which really need a result set (table window browsing and such).
But be aware that such a change is fundamental and will need much testing
to be sure it doesn't break anything!
How do I trace
SQLRouters?
In SQLWindows or CTD, you can trace the non-SQLBase routers by adding an entry
of the type log=<filename> to the specific section in SQL.INI,
as shown in the following examples:
SQLRouter | Section 16-bit | Section 32-bit |
---|---|---|
Sybase | [winclient.sybw] | [win32client.syb32] |
Oracle | [winclient.oraw] | [win32client.ora32] |
ODBC | [winclient.odbw] | [win32client.odb32] |
Using front-end result sets, I
periodically receive error number 163 ("Result set not active"). How
can this be avoided?
This error message is usually related to frs* files that did not get
deleted, for example after a crash. Before you start your application, look
into the TEMP directory (C:\TEMP or similar) and delete all files starting
with "frs". Additional detailed information is available at
www.centurasoft.com/support/tech_info/knowledge_base/ctdinfo/error163.html
and under the topic Performance decrease with
FRS.
How do I have to configure a
connection to Oracle for 16-bit (SQLWindows) applications?
- Execute the SETUP.EXE from the "Windows" directory of the Oracle CD
to install the 16-bit client. Thereafter, run 16-bit NET*Easy to configure the
database alias, e.g.
Database Alias: MyDB
Protocol: TCP/IP
Host Name: 192.1.11.28
Database Instance: MyDB
Afterwards, run 16-bit SQL*Plus (Oracle for Windows) to connect to the database
using the MyDB alias. If the connection cannot be established, it is an Oracle
setup problem.
Next, run SQLTalk to connect to the same database alias. This requires the
correct installation of Centura's connectivity software for Oracle and the
following entries in SQL.INI:
[winclient.dll]
comdll=sqloraw
[oragtwy]
remotedbname=MyDB,@TNS:MyDB
Consult Centura's online help as well as the documentation for further
information.
I want to insert/update data into an
Oracle-database and I receive the error "ORA-01465 invalid hex
number" all the time. What am I doing wrong?
One of your columns in the database is probably of type "LONG
RAW". In that case you have to set the bind type of the long column to binary by
calling SqlSetLongBindDatatype( nBindVarNumber, nDatatype ) between your SqlPrepare()
and your SqlExecute().
On non-US versions, numeric values are
not displayed correctly when data is fetched from an Oracle database. How can I
avoid this?
You have to change the settings for the thousands and decimal separators to the
display format used in your locale. To have numeric values
appear in the German format (1.000.000,00) you'd have to execute the following
statement:
ALTER SESSION SET NLS_NUMERIC_CHARACTERS=',.'
That way you can also change the display settings for dates with
ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD-HH24.MI.SS'
and sort directions by executing
ALTER SESSION SET NLS_LANGUAGE=GERMAN.
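As a quick check that the session settings are in effect, you can format a number with the locale-aware G (group) and D (decimal) format elements. The literal and format mask below are only illustrative; note that in NLS_NUMERIC_CHARACTERS the first character is the decimal separator and the second the group separator:

```sql
-- With the German separators in effect (decimal ',' and group '.'),
-- this should display 1.234.567,89.
SELECT TO_CHAR(1234567.89, '9G999G999D99') FROM DUAL;
```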
How do I set the optimizer level?
The optimizer settings can be changed with the statement
ALTER SESSION SET OPTIMIZER_GOAL=<level>
where level is one of the following values:
RULE | rule-based optimization (index) |
CHOOSE | cost-based optimization (index + amount of data + location of data) |
FIRST_ROWS | cost-based optimization for the first rows of the result set |
ALL_ROWS | cost-based optimization for the entire result set |
Is it possible to view the
execution plan for a SQL statement with Oracle?
Oracle generates the execution plan for a SQL statement with the
command "EXPLAIN PLAN". In order to use it, you have to make sure
that the table "PLAN_TABLE" exists. Its definition is located in the
script UTLXPLAN.SQL:
CREATE TABLE PLAN_TABLE (
statement_id varchar2(30),
timestamp date,
remarks varchar2(80),
operation varchar2(30),
options varchar2(30),
object_node varchar2(128),
object_owner varchar2(30),
object_name varchar2(30),
object_instance numeric,
object_type varchar2(30),
optimizer varchar2(255),
search_columns numeric,
id numeric,
parent_id numeric,
position numeric,
cost numeric,
cardinality numeric,
bytes numeric,
other_tag varchar2(255),
other long);
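Once PLAN_TABLE exists, a typical way to use it is to run EXPLAIN PLAN with a statement ID and then query the table. The statement ID, the sample query, and the COMPANY table below are only illustrative:

```sql
-- Fill PLAN_TABLE with the plan for a sample query.
EXPLAIN PLAN SET STATEMENT_ID = 'demo1' FOR
  SELECT * FROM COMPANY WHERE COMPANY_NAME = 'ACME';

-- Read the plan back; indenting by the ID/PARENT_ID hierarchy
-- is a common refinement.
SELECT id, parent_id, operation, options, object_name
  FROM PLAN_TABLE
 WHERE statement_id = 'demo1'
 ORDER BY id;
```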
How do I connect SqlWindows to Oracle
8?
Use the 16-bit SQL*Net 2.3, available with Oracle 7.3. The default installation on
Windows NT, 95, or 98 is 32-bit, so in order to install the 16-bit SQL*Net,
you'll have to go into the Windows directory of the Oracle CD and execute the
16-bit installer.
What are the differences between
SQLBase and Oracle and what do I have to take care of while porting
over?
| SQLBase | Oracle |
---|---|---|
Naming Conventions | SQLBase allows creation of tables with column names like ACCESS and ORDER. | Oracle does not; its list of reserved words is longer. One way around this is to rename such columns in the software itself (before porting to Oracle) to similar-sounding names like ACCESS_X. |
Renaming Columns | SQLBase supports renaming columns. | Oracle doesn't support
renaming or dropping columns. To drop a column in Oracle you have to (a) create a new table, (b) copy the existing data to the new table, (c) drop constraints/indexes from the original table, (d) add the same constraints/indexes to the new table, (e) drop the original table, and (f) rename the new table. Third-party tools (like ERwin) are available on the market that can generate a SQL script to do this automatically. |
Altering Tables |
The syntax of ALTER TABLE ... add FOREIGN KEY in SQLBase is different from Oracle. |
|
Dropping Tables | SQLBase and Oracle behave differently when dropping tables. Consult the documentation. | |
Constraint names | In SQLBase the same constraint name can be used with two different tables. | Constraint names have to be unique over the whole database in Oracle. However, Oracle allows longer constraint names than SQLBase, so while porting the data structure to Oracle a SQLBase constraint name can be changed to <Table Name>_<Constraint Name>. |
Referential Integrity | SQLBase allows ON DELETE SET NULL as a RI option. | Oracle does not allow ON
DELETE SET NULL constraint. Each constraint has to be set to ON DELETE
CASCADE or ON DELETE RESTRICT. |
Indices | Index definitions in SQLBase allow function names like @UPPER(). | Oracle doesn't allow
this. In a relational DBMS the result of a query is the same whether an index exists or not; indexes only speed up certain types of queries. So dropping such indexes in Oracle will not cause inaccurate results. However, any performance loss will have to be monitored, and Oracle-specific solutions might have to be programmed. |
Primary Key Generation | As long as you don't manually create a unique index on the primary key columns, SQLBase tables are in an incomplete state. | When you specify a Primary Key in Oracle, a unique index gets automatically created on those columns. |
Date/Time | SQLBase supports DATE, TIME, and DATETIME. | Oracle implements date/time storage in DATE and has no support for storing microseconds. It is also worth studying Oracle documentation about ALTER SESSION SET NLS_DATE_FORMAT... |
Current Date/Time | You can access the current date/time with SYSDATE, SYSTIME, or SYSDATETIME. | The current date and time are available through the pseudo-column "SYSDATE". |
Characters | Datatype CHAR( ) of SQLBase is the equivalent of Oracle's VARCHAR2(). | Don't use VARCHAR() in Oracle. |
LONG VARCHAR |
LONG VARCHAR SQLBase supports having more than one LONG VARCHAR column in a table. SQLBase allows a table to be created as... CREATE TABLE table1 (col1 LONG VARCHAR, col2 LONG VARCHAR) |
LONG VARCHAR2 / LONG RAW Oracle allows only one such column per table. Splitting the table vertically into two could be one solution. Storing multiple rows in the same table would be another solution. Oracle does not allow a table to be created as... CREATE TABLE table1 (col1 LONG VARCHAR2, col2 LONG VARCHAR2) CREATE TABLE table1 (col1 LONG RAW, col2 LONG RAW) CREATE TABLE table1 (col1 LONG RAW, col2 LONG VARCHAR2) |
Data Contents |
Text and binary data can be stored in LONG VARCHAR columns in SQLBase. | Binary data has to be stored in LONG RAW in Oracle. |
Inserts/Updates |
Usage of SqlSetLongBindDatatype( ) is not mandatory. | Before inserting or updating a row having LONG RAW column via a bind variable in Centura, SqlSetLongBindDatatype() has to be performed. |
Selecting Function Output | Functions in
SQLBase like @NULLVALUE( ) have different names in Oracle and might expect
other types of parameters. Special attention has to be given to string-
and date-related functions. For example, the first character of a string in SQLBase is position zero, whereas the first character of a string in Oracle is referred to as position one. Possible workarounds: |
|
(a) Write a
SQL "filtering" function which accepts a SQL statement in SQLBase
syntax (string), translates it into Oracle syntax, and returns the SQL command
string in Oracle syntax. Then change all instances of SqlPrepare( sStatement ) in
your program to SqlPrepare( OracleTranslate( sStatement ) ). (b) Code each instance of SqlPrepare('SELECT ....') as follows: If bConnectedToSqlBase, Set sStatement='SELECT ........'; Else If bConnectedToOracle, Set sStatement='SELECT ........'; then SqlPrepare( sStatement ). After SqlConnect() one can use SqlGetParameter( DBP_BRAND ) to find out whether one is connected to SQLBase or Oracle. |
||
Select Distinct |
SQLBase supports SELECT DISTINCT PatientId, PatientName ... ORDER BY PatientId |
In Oracle, you have to execute SELECT DISTINCT PatientId, PatientName ... ORDER BY PatientId, PatientName |
Selection Columns | SQLBase supports SELECT PatientId, ..., PatientId ... ORDER BY PatientId |
You can only specify each column once in an Oracle SELECT statement. |
Column Position IDs | SQLBase supports SELECT Gender FROM tblPatient GROUP BY 1 |
Oracle expects the same
to be written as... SELECT Gender FROM tblPatient GROUP BY Gender |
ROWIDs | In SQLBase the value of
ROWID changes whenever a record gets updated. This fact is used by many applications to implement an optimistic locking mechanism: read a record without a lock, update it WHERE ROWID=:sROWID; if no records are modified, then someone else modified it between your reading and updating it. |
The ROWID assigned to
each record in Oracle is static; it does not change when the record gets updated. To implement such optimistic locking in Oracle, it sounds attractive to add a timestamp (of the last modified date/time) to each record. However, Oracle timestamps do not include microseconds and hence are not fine-grained enough. It is much better to use a Sequence to store a unique value in the updated record. |
Updates | SQLBase supports UPDATE ... CHECK EXISTS |
Oracle doesn't support this feature. Try to use SqlGetModifiedRows( ) instead. |
Cursor Context Preservation | If you specify cursor
context preservation (DBP_PRESERVE) for a handle hSql and then do a ROLLBACK on a second
handle hSql2, the cursor context of hSql is preserved in SQLBase. |
Oracle doesn't support
this feature. Instead, use (a) ROLLBACK TO SAVEPOINT or (b) do the rollback via an in-line PL/SQL block: Execute (Begin Rollback; End). |
Resultsets | Verbs like
SqlFetchPrevious() and SqlGetResultSetCount() can be executed only in result-set mode. SQLBase creates backend result sets. |
The database routers maintain frontend result sets for Oracle, which are slow. |
"FETCHTHROUGH" | SQLBase supports re-reading resultsets without preparing/executing the original statement again. | This is not supported by
Oracle. Re-execute the statement and fetch it again. Evaluate changing your
architecture to the following scenario: Base your form on Sql Handles hRead, hWrite and hReRead. hRead constructs a cursor of only ROWIDs which are read into an array. When the user wants to read First, Last, Next, Previous records of the cursor, hReRead is used (with in-array ROWID) to read the actual record and present it to the user. The program maintains the current-record-pointer by remembering the n-th array element which contains the ROWID of the record which is currently being displayed. |
Stored Procedures / Triggers |
The syntax of stored procedures is very different between
Oracle and SQLBase. It is so different that a filtering program like the one described above cannot translate it; stored procedures and triggers have to be ported manually. |
|
Resultsets from SP | SQLBase can return
one-record-at-a-time with SqlFetchNext() from the resultset of a Stored Procedure. |
Oracle is constrained to returning the whole resultset back to the calling application via arrays. An article in Centura Pro (www.ProPublishing.com, November 1998) suggests a way to get around this problem. |
System Tables | SYSTABLES, SYSCOLUMNS, etc. display the structure of the database in SQLBase | There are scripts (VIEW7.SQL, VIEW8.SQL) that create the views according to Centura's standard system objects, wrapping around Oracle's schema. |
Labels and Comments | In SQLBase's table SYSCOLUMNS, the LABELS and COMMENTS are stored separately. | Oracle only allows
COMMENTS. Usually this does not create a problem, as these two are used for
documentation purposes only. It is more common to document your data structures
in a data modelling tool (like ERwin) and not in the database. |
Execution Plans | The function
SqlGetParameter() is used to get the execution plan and query cost. |
This will not work with Oracle. Refer to the Oracle documentation on how to create the plan table, etc. to acquire such information. |
Update Statistics |
The syntax of this statement is different on the two platforms. |
|
Schema Porting |
One has to create an Oracle database which is similar in structure
to the SQLBase database. This can be done by the following methods: |
|
Data Porting | After having
created an identical Oracle database, one may want to port existing data from
SQLBase to Oracle. This can be done by the following methods: (a) You can use the ODBC-related tools mentioned on the "3rd-Party Tools" web page at www.CenturaSoft.com. (b) You can write your own table-copy Centura program: connect to both databases via two Sql Handles and copy data row by row from one table to the other. (c) Use SQLTalk to UNLOAD DATA, then use SQLTalk to LOAD SQL into Oracle (this is slow and inefficient). (d) Use SQLTalk to UNLOAD DATA, then use a filtering program to translate the format of the unloaded data into a format acceptable to the SQL*LOADER supplied with the Oracle software and load it. (e) Try replication software packages like Centura SQLBase Exchange. |
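The column-drop workaround described in the table above can be sketched in SQL. All table, column, and constraint names here are hypothetical, and the exact constraint/index DDL depends on your schema:

```sql
-- (a)+(b) Create the new table without the unwanted column and copy the data.
CREATE TABLE PATIENT_NEW AS
  SELECT PATIENT_ID, PATIENT_NAME FROM PATIENT;

-- (c) Drop constraints/indexes from the original table.
ALTER TABLE PATIENT DROP CONSTRAINT PATIENT_PK;

-- (d) Add the same constraints/indexes to the new table.
ALTER TABLE PATIENT_NEW ADD CONSTRAINT PATIENT_PK PRIMARY KEY (PATIENT_ID);

-- (e)+(f) Drop the original table and rename the new one.
DROP TABLE PATIENT;
RENAME PATIENT_NEW TO PATIENT;
```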
On some machines, with
apparently identical DLLs, SQL.INI files, and ODBC setup, trying to establish a
connection results in errors whilst on others, connection is trouble-free. How
should I fix this problem?
This error is generally related to a path problem determining which DLLs are
loaded. You might want to examine which modules are loaded, and where they are
loaded from, on machines that work, and compare with machines that fail. Some
drivers install the ODBC Driver Manager (ODBC32.DLL) anywhere they like,
which results in a mix-and-match of versions of the ODBC engine.
How can I use "begin
transaction" and "commit transaction" inside stored procedures
for MS SQL Server (ver. 6.0 or 6.5)?
If you want to use "begin transaction" and "commit
transaction" (or "rollback transaction") inside stored
procedures for MS SQL Server (ver. 6.0 or 6.5), you should turn on AUTOCOMMIT
before the stored procedure call:
Call OdrPrepareProc( hSql, 'call myproc', '' )
Call SqlSetParameter( hSql, DBP_AUTOCOMMIT, TRUE, '' )
Call SqlExecute( hSql )
Call SqlSetParameter( hSql, DBP_AUTOCOMMIT, FALSE, '' )
If you do not turn AUTOCOMMIT on, the Centura SQL/API implicitly adds
"begin tran" before any DML statement and "commit tran" when
SqlImmediate( 'COMMIT' ) is called.
Inside the procedure you can check whether you are already in a transaction and
return an error:
CREATE PROCEDURE myproc
AS
if @@TRANCOUNT <> 0
begin
return -1 -- error
end
...
begin tran
-- make some DML here
.....
if < ERROR >
rollback tran
else
commit tran
How do I concatenate columns in
SELECT statements? The '||' operator does not seem to work.
SQL Server uses the '+' sign to concatenate columns as shown in the following
example:
SELECT colA + colB + colC FROM tableD INTO :sVar
I can't get the JDBC drivers to
work. What's wrong?
1) Make sure you're using only the released JDBC driver and not any
.class files of the beta release.
2) Rename the install directory so that it uses exactly the following case:
C:\Centura\centura\java\sqlbase\
3) To register the driver you must use the following name, including case:
"centura.java.sqlbase.SqlbaseDriver"
To test your JDBC connection, use the JDBCTest utility available at
www.intersolv.com.
When trying to access a SQLBase
database from MS Access, I get the message "Table 'TMP%#MAU@' already
exists". Can this be avoided?
Detailed information about this topic is available at
www.centurasoft.com/support/tech_info/bulletins/access~1.html.
I'm using some software to load entries
into SQLBase using ODBC, but it uses quotes around column-identifiers and
therefore SQLBase doesn't recognize them correctly. Is there a
workaround?
When using a product that specifies SQL statements like "INSERT INTO
"INETLOG" ("ClientHost", ..) ...", SQLBase does not
accept the quotes around table and column names. A workaround is to create the
columns and tables with the quotes in the names, like
CREATE TABLE "INETLOG" ( "ClientHost" char(255), ...)
in SQLBase.
I'm using CTD/SQLWindows SQLRouter for
ODBC and the ODBC driver is not listed as a certified driver. What can I
do?
It is impossible to test all ODBC drivers. If the driver has ODBC 2.0
conformance, then it should probably work. If you seem to be having difficulty
with datatypes, refer to the online help for information on DCC.INI.
How do I retrieve information from the
system catalog of a connected ODBC datasource?
This can be done if you hook directly into the ODBC API. Alternatively, you
can use ODBCTestX.OCX, a freeware ActiveX control that wraps around the ODBC
API. It is available in the Downloads
section.
When trying to connect to an ODBC
datasource, I keep getting error 401 "cannot connect to database".
How can I avoid this?
Centura stores ODBC information in a file called GUPTA.INI, which is supposed to
make connecting to the data source faster. This file is stored in the directory
that contains ODBC32.DLL. The error is typically caused by changing the location
or connect string of an ODBC data source: Centura keeps using the stale
information cached in GUPTA.INI. It is therefore recommended that you remove
GUPTA.INI before any connect to an ODBC data source. This will prevent the error
as long as the data source is properly configured in the ODBC Data Source
Administrator.
How can I get around error 1811
"file not found" when trying to connect to an ODBC data
source?
See the answer to the previous question: remove the GUPTA.INI file (stored in
the directory that contains ODBC32.DLL) before connecting to the ODBC data
source.
I changed my ODBC data source to use a
different database but Centura keeps connecting to the old database. How do I
fix that?
See the answer to error 401 above: Centura caches connection information in
GUPTA.INI (stored in the directory that contains ODBC32.DLL). Remove that file
before connecting and the new data source settings will be picked up.
It seems that this driver does not
support column aliases. Is this going to be fixed?
On some 6.1.x versions a statement like 'SELECT NAME AS CUSTNAME FROM
TABLE' always returns the original column name. Even worse, when computed
columns, aggregates, etc. are requested, the driver does not return headings at
all. SQLBase 7.0.1 ships with a new ODBC driver, version 3.01.0006; testing with
that driver shows that the following SQL statements work perfectly
fine:
SELECT COMPANY_NAME AS NAME FROM COMPANY;
SELECT INVOICE_NO, SUM(ITEM_PRICE) AS X, AVG(ITEM_PRICE) AS Y FROM INVOICE_ITEM
GROUP BY 1;
SELECT @LEFT(COMPANY_NAME,4) AS SPLITNAME FROM COMPANY;