Path: news1.ucsd.edu!ihnp4.ucsd.edu!munnari.OZ.AU!harbinger.cc.monash.edu.au!nntp.coast.net!news-res.gsl.net!news.gsl.net!sgigate.sgi.com!news.corp.sgi.com!mew.corp.sgi.com!pablo
From: pablo@sgi.com (Pablo Sanchez)
Newsgroups: comp.databases.sybase,comp.answers,news.answers
Subject: Sybase FAQ: 1/8 - index
Supersedes: <FAQ.index_833665144@sgi.com>
Followup-To: comp.databases.sybase
Date: 1 Jul 1996 14:29:32 GMT
Organization: Silicon Graphics, Inc. Nederland, CO. USA
Lines: 249
Approved: news-answers-request@MIT.EDU
Message-ID: <FAQ.index_836231368@sgi.com>
Reply-To: pablo@sgi.com
NNTP-Posting-Host: mew.corp.sgi.com
Summary: Info about SQL Server, bcp, isql and other goodies
Posting-Frequency: monthly
Originator: pablo@mew.corp.sgi.com
Xref: news1.ucsd.edu comp.databases.sybase:29417 comp.answers:15540 news.answers:62008

Archive-name: databases/sybase-faq/part1
URL: http://reality.sgi.com/pablo/Sybase_FAQ



                              SYBASE FAQ INDEX
                                       
   _version 2.4_
     _________________________________________________________________
   
   
  Index of Sections
     * Section 0: Acknowledgements & What's New in this Release
     * Section 1: SQL Server, isql and bcp
     * Section 2: SQL Hacks/Tricks
     * Section 3: Performance and Tuning
     * Section 4: Contributed Software
     * Section 5: Sybase Technical News
     * Section 6: Web Links
     * Section 7: Miscellany
       
   
     _________________________________________________________________
   
   To get a text version of this FAQ:
   
     ftp://sgigate.sgi.com/pub/Sybase_FAQ/FAQ_txt.Z [320K]
     
   To get the HTML for this FAQ:
   
     ftp://sgigate.sgi.com/pub/Sybase_FAQ/FAQ_html.tar.Z [366K]
     
   
     _________________________________________________________________
   
   
   
  SQL Server, isql and bcp
  
   1.1) Are there alternatives to row at a time processing?
   1.2) How do I start/stop SQL Server when the CPU reboots?
   1.3) What's a natural key?
   1.4) What traceflags are available?
   1.5) How do I use traceflags 5101 and 5102?
   1.6) Is it possible to call a UNIX command from within a stored
   procedure or a trigger?
   1.7) How do I move tempdb off of the master device?
   1.8) How do I hide my password using isql?
   1.9) How do I remove row affected and/or ---- when using isql?
   1.10) Why not create all my columns varchar(255)?
   1.11) What is a SQL Server anyway?
   1.12) How do I bcp null dates?
   1.13) What's the best value for cschedspins?
   1.14) When should I execute an sp_recompile?
   1.15) What are the different types of locks and what do they mean?
   1.16) What's the purpose of using holdlock?
   1.17) What's the difference between an _update in place_ versus a
   _deferred update_? - see Q3.9
   1.18) Can I use a named pipe to bcp/dump data out or in?
   1.19) What's a good example of a transaction?
    1.20) What do all the parameters of a buildmaster -d -yall mean?
   1.21) What are some of the hidden/trick DBCC commands?
   1.22) How do I set TS Role in order to run DBCC ...?
   1.23) How do I clear a log suspend'ed connection?
   1.24) How do I manually drop a table?
   1.25) How do I correct timeslice -201?
   1.26) How do I pipe the output of one isql to another?
   1.27) How do I turn off marked suspect on my database?
   1.28) The how's and why's on becoming a Certified Sybase Professional
   (CSPDBA)?
   1.29) What is cmaxpktsz good for? see also Q1.20
    1.30) Table partitioning FAQ
    1.31) Shrinking varchar(m) to varchar(n)
   
     _________________________________________________________________
   
   
   
  SQL Hacks/Tricks
  
   2.1) Point Characteristic Functions.
   2.2) How to implement if-then-else within a select-clause.
   2.3) How to invert/pivot a table (also known as the Oracle decode
   function).
   2.4) How to pad with leading zeros an int or smallint.
    2.5) Divide by zero and nulls.
    2.6) Convert months to financial months.
    2.7) Hierarchy traversal - BOMs.
     _________________________________________________________________
   
   
   
  Performance and Tuning
  
   3.1) What are the nitty gritty details on Performance and Tuning?
   3.2) What is best way to use temp tables in an OLTP environment?
   3.3) What's the difference between clustered and non-clustered
   indexes?
   3.4) Optimistic versus Pessimistic locking?
   3.5) How do I force an index to be used?
   3.6) Why place tempdb and log on low numbered devices? 
   3.7) Have I configured enough memory for my SQL Server?
   3.8) Why should I use stored procedures?
   3.9) I don't understand showplan's output, please explain.
    3.10) Poor man's sp_sysmon.
    3.11) View MRU-LRU procedure cache chain.
   
     _________________________________________________________________
   
   
   
  Contributed Software
  
   4.1) sp_freedevice - lists device, size, used and free.
   4.2) sp_whodo - augments sp_who by including additional columns: cpu,
   I/O...
    4.3) SQL and sh(1) to dynamically generate a dump/load database
   command.
   4.4) SybPerl - Perl interface to Sybase.
   4.5) dbschema.pl - SybPerl script to take a logical snap of a
   database.
   4.6) Sybtcl - TCL interface to Sybase.
   4.7) Augmented system stored procedures.
   4.8) Examples of Open Client and Open Server programs -- see Q6.11.
   4.9) SQL to determine the space used for an index.
   4.10) dsql - an isql alternative with command history - see Q4.13.
   4.11) xsybmon - an X interface to sp_monitor
    4.12) sp_dos - This procedure graphically displays the scope of an
    object
   4.13) sqsh - a superset of dsql with local variables, redirection,
   pipes and all sorts of goodies.
   4.14) sp_getdays - returns days in current month.
   4.15) ddl_insert.pl - creates insert DDL for a table.
   
     _________________________________________________________________
   
   
   
  Sybase Technical News
  
   5.1) Volume 3, Number 2
   5.2) Volume 3, Number 3
   5.3) Volume 4, Number 1
   5.4) Volume 4, Number 2
   5.5) Volume 4, Number 3
   5.6) Volume 4, Number 4
   5.7) Volume 5, Number 1
   5.8) Special Supplement -- Migration to System 11
   
     _________________________________________________________________
   
   
   
  Web Links
  
   6.1) http://sybase.pnl.gov:2080/Sybase/.Sybase.html
   6.2) http://paella.med.yale.edu/topics/database.html
   6.3) http://www.acs.ncsu.edu:80/Sybase
   6.4) http://www.alaska.net/~pacrim/sybase.html
   6.5) Sybase's Replication Server link.
   6.6) Sybase's third-party Applications
   6.7) Sybase's Online Books - sybooks
   6.8) Sybase's third-party bibliography
   6.9) Sybase's White Papers
   6.10) Sybase's Performance Tips
   6.11) Sybase's Open Client, Open Server and other examples
   6.12) Sybase Freeware and Shareware http://www.tiac.net/users/sqltech
   6.13) ftp://sgigate.sgi.com/pub/Sybase_FAQ/Thawleyhndout.ppt.ps.Z
   [3670K]
   
   The mother ship may be reached at http://www.sybase.com
     _________________________________________________________________
   
   
   
  Miscellany
  
   7.1) What can Sybase IQ do for me?
   7.2) Net-review of Sybase books
   
     _________________________________________________________________
   
   
   
    Acknowledgements ...
    
   Thanks to all the folks in comp.databases.sybase who have made this
   FAQ possible. Searching through the archives I found a lot of stuff
   that folks posted to the net, things that were sent to me and things
   that I've come up with. I decided not to give any one particular
   person credit. For instance, the _Net Book Review_ was originally
   written by a kind netter and given to me. Someone else added to it and
   I'm sure that it'll continue to evolve. Rather than having a
   maintenance headache of attributing who did what, let's just say that
   the FAQ is _ours_. Not mine but yours. Deal?
   
   Corny as it may sound, I really enjoy _our_ news group because of the
   low noise to signal ratio and how polite we tend to be with one
   another.
   
   Please mail pablo@sgi.com any changes, comments, complaints and/or
   disagreements and I will respond in due time. Heck I may even fix
   them.
   
   Please abide by the following and include it if redistributed:
   
     _Do not use this information for profit but do_ share it with
     anyone.
     
   
   
   So who is this guy?
     _________________________________________________________________
   
    ... What's New in this Release?
     * Andy Sparrow (andy@ide.com) - index - typo comp.database_s_.sybase
     * Andrew Mackay (slacour@nfld.com) - Q1.1 - improved on set based
       processing algorithm
     * John Bishop (0002030548@mcimail.com) - Q1.30 - partitioning FAQ
       culled from somewhere...
     * Andrew Sigmund (asigmund@redshift.com) - tireless editing of the
       Sybase FAQ
     * Q2.5 - stolen from the net
     * Paul Horn (phorn@stbbs.com) - Q1.4 - added trace flag 299
     * Phil Allen (phila@ozemail.com.au) - Q2.6 - convert months to
       financial months
     * Elton (elton@sybase.com) - Q4.14 - sp_getdays
     * Q1.31 - stolen from the net
     * Scott Gray (gray@voicenet.com) - Q2.7 - hierarchy traversal
     * Q Vincent Yin (umyin@mctrf.mb.ca) - Q4.15 - ddl_insert.pl and
       suggested this section
     * Q3.10 - stolen from the net
     * Q1.4 - cross-referenced Q2.5
     * Richard Cramotte, Jr. (RGCramotte@aol.com) forwarding Bob Munson's
       stored proc - Q4.16 - sp_exam_space
     * David Whitmarsh (djw@accessio.demon.co.uk) - Q3.11 - dbcc procbuf
     * Larry Kagan (kagan_larry@jpmorgan.com) - Q1.20 - corrected typos
     * Q6.13 - Peter Thawley's '96 talk in PostScript rather than
       PowerPoint
     * Edward Barlow (ebarlow@nyc.pipeline.com) - spiffy up Q6.12's
       banner on the index
-- 
Pablo Sanchez              | Ph # (415) 933.3812        Fax # (415) 933.2821
pablo@sgi.com              | Pg # (800) 930.5635  -or-  pablo_p@corp.sgi.com
===============================================================================
I am accountable for my actions.   http://reality.sgi.com/pablo [ /Sybase_FAQ ]

----------------------------------------------------------------------

Path: news1.ucsd.edu!ihnp4.ucsd.edu!munnari.OZ.AU!news.mel.connect.com.au!news.mira.net.au!Germany.EU.net!howland.reston.ans.net!gatech!news.mathworks.com!enews.sgi.com!news.corp.sgi.com!mew.corp.sgi.com!pablo
From: pablo@sgi.com (Pablo Sanchez)
Newsgroups: comp.databases.sybase,comp.answers,news.answers
Subject: Sybase FAQ: 2/8 - section 1
Supersedes: <FAQ.section_1_833665144@sgi.com>
Followup-To: comp.databases.sybase
Date: 1 Jul 1996 14:29:52 GMT
Organization: Silicon Graphics, Inc. Nederland, CO. USA
Lines: 2769
Approved: news-answers-request@MIT.EDU
Message-ID: <FAQ.section_1_836231368@sgi.com>
References: <FAQ.index_836231368@sgi.com>
Reply-To: pablo@sgi.com
NNTP-Posting-Host: mew.corp.sgi.com
Summary: Info about SQL Server, bcp, isql and other goodies
Posting-Frequency: monthly
Originator: pablo@mew.corp.sgi.com
Xref: news1.ucsd.edu comp.databases.sybase:29418 comp.answers:15541 news.answers:62009

Archive-name: databases/sybase-faq/part2
URL: http://reality.sgi.com/pablo/Sybase_FAQ



                 Q1.1: ALTERNATIVE TO ROW AT A TIME PROCESSING
                                       
   
     _________________________________________________________________
   
   Someone asked how they could speed up their processing. They were
   batch updating/inserting gobs of information. Their algorithm was
   something as follows:
   
     ... In another case I do:

If exists (select record) then
        update record
else
        insert record

     I'm not sure which wa[y] is faster or if it makes a difference. I am
     doing this for as many as 4000 records at a time (calling a stored
     procedure 4000 times!). I am interesting in knowing any way to
     improve this. The parameter translation alone on the procedure calls
     takes 40 seconds for 4000 records. I am using _exec_ in DB-Lib.
     
     Would RPC or CT-Lib be better/faster?
     
   A netter responded stating that it was faster to ditch their algorithm
   and to apply a set based strategy:
   
      The way to improve your approach is to convert the row at a time
      processing (which is the more traditional type of thinking) into a
      batch at a time (which is the more relational type of thinking).
      Now I'm not trying to insult you or say that you suck or anything
      like that; we just need to dial you in to think in relational
      terms.
     
     The idea is to do batches (or bundles) of rows rather than
     processing a single one at a time.
     
     So let's take your example (since you didn't give exact values
     [probably out of kindness to save my eyeballs] I'll use your generic
     example to extend what I'm talking about):
     
     Before:

        if exists (select record) then
           update record
        else
           insert record

     
     
     New way:
    1. Load _all_ your rows into a table named _new_stuff_ in a separate
       work database (call it _work_db_) and load it using _bcp_ -- no
       third GL needed.
         1. truncate _new_stuff_ and drop all indexes
         2. sort your data using UNIX sort and sort it by the clustered
            columns
         3. load it using _bcp_
         4. create clustered index using _with sorted_data_ and any
            ancillary non-clustered index.
    2. Assuming that your target table is called _old_stuff_
    3. Do the _update_ in a single batch:

   begin tran

      /* delete any rows in old_stuff which normally
      ** would have been updated... we'll insert 'em instead!
     ** Essentially, treat the update as a delete/insert.
     */

     delete old_stuff
       from old_stuff,
            new_stuff
      where old_stuff.key = new_stuff.key

    /* insert entire new table:  this adds any rows
    ** that would have been updated before and
    ** inserts the new rows
    */
     insert old_stuff
        select * from new_stuff

   commit tran


     You can do all this _without_ writing 3-GL, using _bcp_ and a shell
     script.
     
      A word of caution: _since these inserts/updates are batch oriented
      you may blow your log if you attempt to do too many at a time_. In
      order to avoid this, use the _set rowcount_ directive to create
      _bite-size_ chunks, as sketched below.
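
      A minimal sketch of such a batched delete follows (the 500 row
      batch size and the loop scaffolding are illustrative only, not
      part of the original advice):

   set rowcount 500              /* bite-size chunks */

   declare @done int
   select  @done = 0

   while @done = 0
   begin
      begin tran
      delete old_stuff
        from old_stuff,
             new_stuff
       where old_stuff.key = new_stuff.key
      if @@rowcount = 0          /* nothing left to delete */
         select @done = 1
      commit tran
   end

   set rowcount 0                /* back to unlimited */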
     
   The original respondent reported a _150%_ reduction in processing
   time.
     _________________________________________________________________

              Q1.2: HOW TO START/STOP SQL SERVER WHEN CPU REBOOTS
                                       
   
     _________________________________________________________________
   
   Below is an example of the various files (on _Irix_) that are needed
   to start/stop a SQL Server. The information can easily be extended to
   any UNIX platform.
   
   The idea is to allow as much flexibility as possible to the two
   classes of administrators that admin the machine:
     * The System Administrator
     * The Database Administrator
       
   Any errors introduced by the DBA will not interfere with the System
   Administrator's job.
   
   With that in mind we have the system startup/shutdown file
   _/etc/init.d/sybase_ invoking a script defined by the DBA:
   _/usr/sybase/sys.config/{start,stop}.sybase_
   
_/etc/init.d/sybase_

   On some operating systems this file must be linked to a corresponding
   entry in _/etc/rc0.d_ and _/etc/rc2.d_ -- see _rc0(1M)_ and _rc2(1M)_.
   

#!/bin/sh
# last modified:  10/17/95, sr.
#
# Make symbolic links so this file will be called during system stop/start.
# ln -s /etc/init.d/sybase /etc/rc0.d/K19sybase
# ln -s /etc/init.d/sybase /etc/rc2.d/S99sybase
# chkconfig -f sybase on

# Sybase System-wide configuration files
CONFIG=/usr/sybase/sys.config

# IS_ON tests chkconfig(1M) flags; this is the IRIX convention --
# define (or replace) it appropriately for your platform.
IS_ON=/etc/chkconfig

if $IS_ON verbose ; then        # For a verbose startup and shutdown
        ECHO=echo
        VERBOSE=-v
else                            # For a quiet startup and shutdown
        ECHO=:
        VERBOSE=
fi

case "$1" in
'start')
        if $IS_ON sybase; then
                if [ -x $CONFIG/start.sybase ]; then
                   $ECHO "starting Sybase servers"
                   /bin/su - sybase -c "$CONFIG/start.sybase $VERBOSE &"
                else
                   <error condition>
                fi
        fi
        ;;

'stop')
        if $IS_ON sybase; then
                if [ -x $CONFIG/stop.sybase ]; then
                   $ECHO "stopping Sybase servers"
                   /bin/su - sybase -c "$CONFIG/stop.sybase $VERBOSE &"
                else
                   <error condition>
                fi
        fi
        ;;

*)
        echo "usage: $0 {start|stop}"
        ;;
esac

_/usr/sybase/sys.config/{start,stop}.sybase_

  start.sybase
  

#!/bin/sh -a

#
# Script to start sybase
#
# NOTE: different versions of sybase exist under /usr/sybase/{version}
#

# Determine if we need to spew our output
if [ "$1" != "spew" ] ; then
   OUTPUT=">/dev/null 2>&1"
else
   OUTPUT=""
fi

# 10.0.2 servers
HOME=/usr/sybase/10.0.2
cd $HOME

# Start the backup server
eval install/startserver -f install/RUN_BU_KEPLER_1002_52_01 $OUTPUT

# Start the dataservers
# Wait two seconds between starts to minimize trauma to CPU server
eval install/startserver -f install/RUN_FAC_WWOPR $OUTPUT
sleep 2
eval install/startserver -f install/RUN_MAG_LOAD $OUTPUT

exit 0

  stop.sybase
  

#!/bin/sh

#
# Script to stop sybase
#

# Determine if we need to spew our output
if [ -z "$1" ] ; then
   OUTPUT=">/dev/null 2>&1"
else
   OUTPUT="-v"
fi

eval killall -9 $OUTPUT dataserver backupserver server_check sleep sybmultbuf

exit 0

   
     _________________________________________________________________

                          Q1.3: WHAT'S A NATURAL KEY?
                                       
   
     _________________________________________________________________
   
   Let me think back to my database class... okay, I can't think that far
   so I'll paraphrase... essentially, a _natural key_ is a key for a
   given table that uniquely identifies the row. It's natural in the
   sense that it follows the business or real world need.
   
   For example, assume that social security numbers are unique (I believe
   they're intended to be unique but it's not always the case). If you
   had the following employee table:
   

employee:

        ssn     char(09)
        f_name  char(20)
        l_name  char(20)
        title   char(03)

   Then a natural key would be _ssn_. If the combination of _f_name_ and
   _l_name_ were unique at this company, then another _natural key_ would
   be _f_name, l_name_. As a matter of fact, you can have many _natural
   keys_ in a given table but in practice what one does is build a
   surrogate (or artificial) key.
   
   The surrogate key is guaranteed to be unique because (wait, get back,
   here it goes again) it's typically a monotonically increasing value.
   Okay, my mathematician wife would be proud of me... really all it
   means is that the key is increasing linearly: i+1
   
   The reason one uses a surrogate key is that your joins will be
   faster.
   
   If we extended our employee table to have a surrogate key:
   

employee:

        id      identity
        ssn     char(09)
        f_name  char(20)
        l_name  char(20)
        title   char(03)

   Then instead of doing the following:
   

   where a.f_name = b.f_name
     and a.l_name = b.l_name

   we'd do this:
   

   where a.id = b.id

   We can build indexes on these keys and since Sybase's atomic storage
   unit is 2K, we can stash more values per 2K page with smaller indexes
   thus giving us better performance (imagine the key being 40 bytes
   versus being say 4 bytes... how many 40 byte values can you stash in a
   2K page versus a 4 byte value? -- and how much wood could a wood chuck
   chuck, if a wood chuck could chuck wood?)
   
  Does it have anything to do with natural joins?
  
   
   
   Um, not really... from "A Guide to Sybase..", McGovern and Date, p.
   112:
   
     The equi-join by definition must produce a result containing two
     identical columns. If one of those two columns is eliminated, what
     is left is called the natural join.
     
   
     _________________________________________________________________
Q1.4: Trace Flag Definitions

----------------------------------------------------------------------------

To activate trace flags, add them to the RUN_* script. The following example
is using the 1611 and 260 trace flags.

     Use of these traceflags is not recommended by Sybase. Please use
     at your own risk.

% cd ~sybase/install
% cat RUN_BLAND
#!/bin/sh
#
# SQL Server Information:
#  name:                          BLAND
#  master device:                 /usr/sybase/dbf/BLAND/master.dat
#  master device size:            25600
#  errorlog:                      /usr/sybase/install/errorlog_BLAND
#  interfaces:                    /usr/sybase
#
/usr/sybase/dataserver -d/usr/sybase/dbf/BLAND/master.dat \
-sBLAND -e/usr/sybase/install/errorlog_BLAND -i/usr/sybase \
-T1611 -T260

----------------------------------------------------------------------------

                                 Trace Flags

 Flag                              Description

 200  Displays messages about the before image of the query-tree.

 201  Displays messages about the after image of the query-tree.

 241  Compress all query-trees whenever the SQL dataserver is started.

 260  Reduce TDS (Tabular Data Stream) overhead in stored procedures. This
      flag should be on for all SQL dataservers unless you are using
      SQLDebug.

 299  This trace flag instructs the dataserver to not recompile a child
      stored procedure that inherits a temp table from a parent procedure.

 302  Print information about the optimizer's index selection.

 310  Print information about the optimizer's join selection.

 320  Turn off the join order heuristic.

 324  Turn off the like optimization for ad-hoc queries using
      @local_variables.

 602  Prints out diagnostic information for deadlock prevention.

 603  Prints out diagnostic information when avoiding deadlock.

 699  Turn off transaction logging for the entire SQL dataserver.

 1204 Send deadlock detection to the errorlog.

 1603 Use standard disk I/O (i.e. turn off asynchronous I/O).

 1606 Create a debug engine start file. This allows you to start up a
      debug engine which can access the server's shared memory for running
      diagnostics. I'm not sure how useful this is in a production
      environment as the debugger often brings down the server. I'm not
      sure if Sybase have ported the debug stuff to 10/11. Like most of
      their debug tools it started off quite strongly but was never
      developed.

 1608 Startup only engine 0; use dbcc engine(online) to incrementally
      bring up additional engines until the maximum number of configured
      engines is reached.

 1610 Boot the SQL dataserver with TCP_NODELAY enabled.

 1611 If possible, pin shared memory -- check errorlog for
      success/failure.

 1613 Set affinity of the SQL dataserver engines onto particular CPUs --
      usually pins engine 0 to processor 0, engine 1 to processor 1...

 1615 SGI only: turn on recoverability to filesystem devices.

 2512 Prevent dbcc from checking syslogs. Useful when you are constantly
      getting spurious allocation errors.

 3300 Display each log record that is being processed during recovery. You
      may wish to redirect stdout because it can be a lot of information.

 3604 Send dbcc output to screen.

 3605 Send dbcc output to errorlog.

 3607 Do not recover any database, clear tempdb, or start up checkpoint
      process.

 3608 Recover master only. Do not clear tempdb or start up checkpoint
      process.

 3609 Recover all databases. Do not clear tempdb or start up checkpoint
      process.

 3610 Pre-System 10 behavior: divide by zero to result in NULL instead of
      error - also see Q2.5.

 3620 Do not kill infected processes.

 4013 Place a record in the errorlog for each login to the dataserver.

 5101 Forces all I/O requests to go thru engine 0. This removes the
      contention between processors but could create a bottleneck if
      engine 0 becomes busy with non-I/O tasks. For more information,
      see Q1.5.

 5102 Prevents engine 0 from running any non-affinitied tasks. For more
      information, see Q1.5.

 7103 Disable table lock promotion for text columns.

 8203 Display statement and transaction locks on a deadlock error.
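
Boot-time flags such as 1611 must go in the RUN_* script as shown above, but
many of the diagnostic flags (for example 302, 310 and 3604) can also be
toggled for your own session with dbcc traceon()/traceoff(). A minimal
sketch:

1> dbcc traceon(3604, 302, 310)
2> go
   ... run the query you want optimizer diagnostics for ...
1> dbcc traceoff(3604, 302, 310)
2> go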

                      Q1.5: TRACE FLAGS -- 5101 AND 5102
                                       
   
     _________________________________________________________________
   
  5101
  
   Normally, each engine issues and checks for its own Disk I/O on behalf
   of the tasks it runs. In completely symmetric operating systems, this
   behavior provides maximum I/O throughput for SQL Server. Some
   operating systems are not completely symmetric in their Disk I/O
   routines. For these environments, the server can be booted with the
   5101 trace flag. While tasks still request disk I/O from any engine,
   the actual request to/from the OS is performed by engine 0. The
   performance benefit comes from the reduced or eliminated contention on
   the locking mechanism inside the OS kernel. To enable I/O affinity to
   engine 0, start SQL Server with the 5101 Trace Flag.
   
   Your errorlog will indicate the use of this option with the message:

        Disk I/O affinitied to engine: 0

   This trace flag only provides performance gains for servers with 3 or
   more dataserver engines configured and being significantly utilized.
   
   _Use of this trace flag with fully symmetric operating systems will
   degrade performance!_
   
  5102
  
    The 5102 trace flag prevents engine 0 from running any non-affinitied
   tasks. Normally, this forces engine 0 to perform Network I/O only.
   Applications with heavy result set requirements (either large results
   or many connections issuing short, fast requests) may benefit. This
   effectively eliminates the normal latency for engine 0 to complete
   running its user thread before it issues the network I/O to the
   underlying network transport driver. If used in conjunction with the
   5101 trace flag, engine 0 would perform all Disk I/O and Network I/O.
   For environments with heavy disk and network I/O, engine 0 could
   easily saturate when only the 5101 flag is in use. This flag allows
   engine 0 to concentrate on I/O by not allowing it to run user tasks.
   To force task affinity off engine 0, start SQL Server with the 5102
   Trace Flag.
   
   Your errorlog will indicate the use of this option with the message:

        I/O only enabled for engine: 0

   
     _________________________________________________________________
   
   _Warning: Not supported by Sybase. Provided here for your enjoyment._

          Q1.6: CALLING UNIX FROM A _TRIGGER_ OR A _STORED PROCEDURE_
                                       
   
     _________________________________________________________________
   
   Periodically folks ask if it's possible to make a system command or
   call a UNIX process from a Trigger or a Stored Procedure.
   
   This is _not_ possible from within the SQL Server -- it has no
   extension capabilities to allow you to go outside to the operating
   system or run your own custom C routine as part of the server. The
   typical ways people have implemented this capability are:
    1. Buy Open Server and bind in your own custom stuff (calls to
       system() or custom C code) and make Sybase RPC calls to it.
    2. Have a dedicated client application running on the server box
       which regularly scans a table and executes the commands written
       into it (and tucks the results into another table which can have a
       trigger on it to gather results...). It is somewhat tricky but
       cheaper than option 1...
       
   I know I've (gently) hit Sybase over the years to allow me to extend
   the SQL Server product by "patching" in my own routines and functions,
   but they don't see a market need since these two options exist...Some
   days you just have to eat the cake that you've got...
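
   For option 2 above, a minimal sketch of the "command table" approach
   (all of the names here are hypothetical, not part of any Sybase API):
   a trigger or stored procedure writes the desired command into a work
   table, and a dedicated client polls that table and executes whatever
   it finds.

 create table cmd_queue
 (
     cmd_id    numeric(9,0)  identity,
     cmd_text  varchar(255)  not null,
     done      tinyint       default 0
 )
 go

 /* from within a trigger or stored procedure: */
 insert cmd_queue (cmd_text)
 values ("echo 'low stock' | mail dba")
 go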
     _________________________________________________________________

            Q1.7: HOW DO I MOVE _TEMPDB_ OFF OF THE MASTER DEVICE?
                                       
   
     _________________________________________________________________
   
   Typically, folks place _tempdb_ on a cooked (regular - as opposed to a
   raw device) file system to gain any write buffering that the Operating
   System may offer. However, when the SQL Server creates the default
   _tempdb_ size, the first 2MB are on the _master device_.
   
   The following is a method to move _tempdb_ completely off of the
   _master device_:
   
     _It is recommended that you do the following within a transaction._
    1. Dump the master database.
    2. Reboot Server in single user mode.
    3. Create a dummy database on a file system. The size of the dummy db
       depends on whether or not you have altered _tempdb_. If _tempdb_
       has not been altered then just create 2MB.
    4. Delete row(s) in _sysusages_ for _tempdb_.
    5. Delete row(s) in _sysdatabases_ for _tempdb_.
    6. Make the dummy database be _tempdb_ as follows:

 1> update sysusages
 2> set dbid = 2 where dbid = db_id(_dummy database_)
 3> go
 1> update sysdatabases set dbid = 2, name = "tempdb"
 2> where dbid = db_id(_dummy database name_)
 3> go

    7. Turn on _select into/bulkcopy_ option on the new _tempdb_.
    8. Reboot Server (not in single user mode).
       
  A quick alternative
  
   Another method is to drop the segments, the only drawback is that the
   2MB in master are wasted (but so what!):
    1. Alter tempdb on another device:

 1> alter database tempdb on ...
 2> go
    2. Use the tempdb:

 1> use tempdb
 2> go
    3. Drop the segments:

 1> sp_dropsegment "default", tempdb, master
 2> go
 1> sp_dropsegment logsegment, tempdb, master
 2> go

   
     _________________________________________________________________

                     Q1.8: HIDING YOUR PASSWORD TO _ISQL_
                                       
   
     _________________________________________________________________
   
   Here are a menagerie (I've always wanted to use that word) of
   different methods to hide your password. Pick and choose whichever
   fits your environment best:
   
Single SQL Server on host

  Script #1
  
   Assuming that you are using bourne shell _sh(1)_ as your scripting
   language you can put the password in a file and substitute the file
   where the password is needed.

#!/bin/sh

# invoke say ISQL or something....
# (a sketch: feed the password file, then the SQL from stdin, to isql)
( cat $HOME/dba/_password_file_
  cat ) | isql -Umyuserid -Smyserver

  Script #2
  

#!/bin/sh
umask 077
cat

  Script #3
  

#!/bin/sh
umask 077
cat

  Script #3
  

#!/bin/sh
umask 077
isql -Umyuserid -Smyserver

  Script #4
  

#!/bin/sh
umask 077
isql -Umyuserid -Smyserver

  Script #5
  

#!/bin/sh
echo 'mypassword
use mydb
go
sp_who
go' | isql -Umyuserid -Smyserver


  Script #6
  

#!/bin/sh
echo "`myScriptForGeneratingPasswords myServer`
use mydb
go
sp_who
go" | isql -Umyuserid -Smyserver


  Script #7
  
   Apparently Solaris precludes hiding passwords. While _isql_ tries,
   Solaris is too smart for it and puts them back on the command line.
   We just came up with a clever option for this:

     isql -w000000000000000000000000000000000001 -Pmypass ...

     Apparently Solaris' _ps(1)_ is too brain dead to wrap and will only
     show the first 40 characters or so of the command, so you need to
     guarantee that the password is after that. I think this is 100%,
     but we will be fooling around with it a bit more.
     
Multiple SQL Servers on host

     Again, assuming that you are using bourne shell as your scripting
     language, you can do the following:
    1. Create a _global file_. This file will contain passwords, generic
       functions, master device for the respective DSQUERY.
    2. In the actual scripts, source in the _global file_.
       
  _Global File_


SYBASE=/usr/sybase

my_password()
{
   case $1 in
      SERVER_1)  PASSWD="this";;
      SERVER_2)  PASSWD="is";;
      SERVER_3)  PASSWD="bogus";;
      *) return 1;;
   esac

   return 0
}

  Generic Script

#!/bin/sh -a

#
# Use "-a" for auto-export of variables
#

# "dot" the file - equivalent to csh() "source" command
. $HOME/dba/_global_file_

DSQUERY=$1

# Determine the password:  sets PASSWD
my_password $DSQUERY
if [ $? -ne 0 ] ; then # error!
   echo ""
   exit 1
fi

# invoke say ISQL or something....
echo "$PASSWD
dbcc ...
go" | $SYBASE/bin/isql -U sa -S $DSQUERY -w1000

     
       _______________________________________________________________

                 Q1.9: HOW TO REMOVE _ROW AFFECTED_ AND _---_
                                       
   
     _________________________________________________________________
   
   If you pipe the output of _isql_ then you can use _sed(1)_ to remove
   this extraneous output:
   

  echo "$PASSWD
        sp_who
        go" | isql -U sa -S MY_SERVER | sed -e '/affected/d' -e '/---/d'

   If you simply wish to eliminate the _row affected_ line use the _set
   nocount on_ switch.
     _________________________________________________________________

                    Q1.10: WHY NOT MAX OUT ALL MY COLUMNS?
                                       
   
     _________________________________________________________________
   
   People occasionally ask the following valid question:
   
     Suppose I have varying lengths of character strings none of which
     should exceed 50 characters.
     
     _Is there any advantage of last_name varchar(50) over this last_name
     varchar(255)?_
     
     That is, for simplicity, can I just define all my varying strings to
     be varchar(255) without even thinking about how long they may
      actually be? Is there any storage or performance penalty for this?
     
   There is no performance penalty by doing this but as another netter
   pointed out:
   
     If you want to define indexes on these fields, then you should
     specify the smallest size because the sum of the maximal lengths of
     the fields in the index can't be greater than 256 bytes.
     
   and someone else wrote in saying:
   
     Your data structures should match the business requirements. This
      way the data structures themselves become a data dictionary for
     others to model their applications (report generation and the like).
     
   
     _________________________________________________________________
Q1.11: What is a SQL Server?

----------------------------------------------------------------------------

Overview

Before Sybase System 10 (as they call it) we had Sybase 4.x. Sybase System
10 has some significant improvements over the Sybase 4.x product line. Namely:

   * the ability to allocate more memory to the dataserver without degrading
     its performance.
   * the ability to have more than one database engine to take advantage of
     multi-processor cpu machines.
   * a minimally intrusive process to perform database and transaction
     dumps.

Background and More Terminology

A SQL Server is simply a Unix process. It is also known as the database
engine. It has multiple threads to handle asynchronous I/O and other tasks.
The number of threads spawned is the number of engines (more on this in a
second) times five. This is the current implementation of Sybase System 10,
10.0.1 and 10.0.2 on IRIX 5.3.

Each SQL dataserver allocates the following resources from a host machine:

   * memory and
   * raw partition space.

Each SQL dataserver can have up to 255 databases. In most implementations
the number of databases is limited to what seems reasonable based on the
load on the SQL dataserver. That is, it would be impractical to house all of
a large company's databases under one SQL dataserver because the SQL
dataserver (a Unix process) will become overloaded.

That's where the DBA's experience comes in with interrogation of the user
community to determine how much activity is going to result on a given
database or databases and from that we determine whether to create a new SQL
Server or to house the new database under an existing SQL Server. We do make
mistakes (and businesses grow) and have to move databases from one SQL
Server to another. And at times SQL Servers need to move from one CPU server
to another.

With Sybase System 10, each SQL Server can be configured to have more than
one engine (each engine is again a Unix process). There's one primary engine
that is the master engine and the rest of the engines are subordinates. They
are assigned tasks by the master.

Interprocess communication among all these engines is accomplished with
shared memory.

     Sometimes when a DBA issues a Unix kill command to extinguish a
     maverick SQL Server, the subordinate engines are forgotten. This
     leaves the shared memory allocated and eventually we may get into
     situations where swapping occurs because this memory is locked. To
     find engines that belong to no master SQL Server, simply look for
     engines owned by /etc/init (process id 1). These engines can be
     killed -- this is just FYI and is a DBA duty.

Before presenting an example of a SQL Server, some other topics should be
covered.

Connections

A SQL Server has connections to it. A connection can be viewed as a user
login but it's not necessarily so. That is, a client (a user) can spark up
multiple instances of their application and each client establishes its own
connection to the SQL dataserver. Some clients may require two or more per
invocation. So typically DBA's are only concerned with the number of
connections because the number of users typically does not provide
sufficient information for us to do our job.

     Connections take up SQL Server resources, namely memory, leaving
     less memory for the SQL Servers' available cache.

SQL Server Buffer Cache

In Sybase 4.0.1 there was a limit to the amount of memory that could be
allocated to a SQL Server. It was around 80MB, with 40MB being the typical
max. This was due to internal implementations of Sybase's data structures.

With Sybase System 10 there really is no limit. For instance, we have a SQL
Server cranked up to 300MB.

The memory in a SQL Server is primarily used to cache data pages from disk.
Consider that the SQL Server is a lightweight Operating System: handling
users (connections), allocating memory to users, keeping track of which data
pages need to be flushed to disk and the like. Very sophisticated and
complex. Obviously if a data page is found in memory it's much faster to
retrieve than going out to disk.

Each connection takes away a little bit from the available memory that is
used to cache disk pages. Upon startup, the SQL Server pre-allocates the
memory that is needed for each connection so it's not prudent to configure
500 connections when only 300 are needed. We'd waste 200 connections and the
memory associated with that. On the other hand, it is also imprudent to
under configure the number of connections; users have a way of soaking up a
resource (like a SQL Server) and if users have all the connections a DBA
cannot get into the server to allocate more connections.
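
A hedged sketch of bumping the connection count (the 350 is illustrative
only); on System 10 the new value takes effect when the SQL Server is next
rebooted:

1> sp_configure "user connections", 350
2> go
1> reconfigure
2> go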

One of the neat things about a SQL Server is that it reaches (just like a
Unix process) a working set. That is, upon startup it'll do a lot of
physical I/O's to seed its cache, to get lookup information for typical
transactions and the like. So initially, the first users have heavy hits
because their requests have to be performed as a physical I/O. Subsequent
transactions have less physical I/O and more logical I/O's. Logical I/O is
an I/O that is satisfied in the SQL Servers' buffer cache. Obviously, this
is the preferred condition.

DSS vs OLTP

We throw around terms like everyone is supposed to know this high tech
lingo. The problem is that they are two different animals that require a SQL
Server to be tuned accordingly for each.

Well, here's the low down.

DSS
     Decision Support System
OLTP
     Online Transaction Processing

What do these mean? OLTP applications are those that have very short orders
of work for each connection: fetch this row and with the results of it
update one or two other rows. Basically, small number of rows affected per
transaction in rapid sucession, with no significant wait times between
operations in a transaction.

DSS is the lumbering elephant in the database world (unless you do some
tricks... out of this scope). DSS requires a user to comb through gobs of
data to aggregate some values. So the transactions typically involve
thousands of rows. Big difference from OLTP.

We never want to have DSS and OLTP on the same SQL Server because the nature
of OLTP is to grab things quickly but the nature of DSS is to stick around
for a long time reading tons of information and summarizing the results.

What a DSS application does is flush out the SQL Server's data page cache
because of the tremendous amount of I/O's. This is obviously very bad for
OLTP applications because the small transactions are now hurt by this
trauma. When it was only OLTP a great percentage of I/O was logical
(satisfied in the cache); now transactions must perform physical I/O.

That's why it's important in Sybase not to mix DSS and OLTP, at least until
System 11 arrives.

     Sybase System 11 release will allow for the mixing of OLTP and DSS
     by allowing the DBA to partition (and name) the SQL Server's
     buffer cache and assign it to different databases and/or objects.
     The idea is to allow DSS to only affect their pool of memory and
     thus allowing OLTP to maintain its working set of memory.

Asynchronous I/O

Why async I/O? The idea is in a typical online transaction processing (OLTP)
application you have many connections (over 200 connections) and short
transactions: get this row, update that row. These transactions are
typically spread across different tables of the databases. The SQL Server
can then perform each one of these asynchronously without having to wait for
others to finish. Hence the importance of having async I/O fixed on our
platform.

Engines

Sybase System 10 can have more than one engine (as stated above). Sybase has
trace flags to pin the engines to a given CPU processor but we typically
don't do this. It appears that the master engine goes to processor 0 and
subsequent subordinates to the next processor.

Currently, Sybase does not scale linearly. That is, five engines don't
make Sybase perform five times as fast; however, we do max out with four
engines. After that, performance starts to degrade. This is supposed to be
fixed with Sybase System 11.

Putting Everything Together

As previously mentioned, a SQL Server is a collection of databases with
connections (that are the users) to apply and retrieve information to and
from these containers of information (databases).

The SQL Server is built and its master device is typically built over a
medium sized (50MB) raw partition. The tempdb is built over a cooked
(regular - as opposed to a raw device) file system to realize any
performance gains by buffered writes. The databases themselves are built
over the raw logical devices to ensure their integrity.

Physical and Logical Devices

Sybase likes to live in its own little world. This shields the DBA from the
outside world known as Unix (or VMS). However, it needs to have a conduit to
the outside world and this is accomplished via devices.

All physical devices are mapped to logical devices. That is, given a
physical device (such as /lv1/dumps/tempdb_01.efs or /dev/rdsk/dks1ds0) it
is mapped by the DBA to a logical device. Depending on the type of the
device, it is allocated, by the DBA, to the appropriate place (vague
enough?).

Okay, let's try and clear this up...

Dump Device

The DBA may decide to create a device for dumping the database nightly. The
DBA needs to create a dump device.

We'll call that logically in the database datadump_for_my_db but we'll map
it to the physical world as /lv1/dumps/in_your_eye.dat So the DBA will write
a script that connects to the SQL Server and issues a command like this:

     dump database my_stinking_db to datadump_for_my_db
     go

and the backupserver (out of this scope) takes the contents of
my_stinking_db and writes it out to the disk file /lv1/dumps/in_your_eye.dat

That's a dump device. The thing is that it's not preallocated. This special
device is simply a window to the operating system.
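
The logical-to-physical mapping itself is established with sp_addumpdevice;
a minimal sketch using the names from this example:

1> sp_addumpdevice "disk", datadump_for_my_db, "/lv1/dumps/in_your_eye.dat"
2> go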

Data and Log Devices

Ah, now we are getting into the world of pre-allocation. Databases are built
over raw partitions. The reason for this is that Sybase needs to be
guaranteed that all its writes complete successfully. Otherwise, if it
posted to a file system buffer (as in a cooked file system) and the machine
crashed, as far as Sybase is concerned the write was committed. It was not,
however, and integrity of the database was lost. That is why Sybase needs
raw partitions. But back to the matter at hand...

When building a new SQL Server, the DBA determines how much space they'll
need for all the databases that will be housed in this SQL Server.

Each production database is composed of data and log.

The data is where the actual information resides. The log is where the
changes are kept. That is, every row that is updated/deleted/inserted gets
placed into the log portion then applied to the data portion of the
database.

     That's why the DBA strives to place the raw devices for logs on
     separate disks because everything has to single thread through the
     log.

A transaction is a collection of SQL statements (insert/delete/update) that
are grouped together to form a single unit of work. Typically they map very
closely to the business.

I'll quote the Sybase SQL Server System Administration guide on the role of
the log:

     The transaction log is a write-ahead log. When a user issues a
     statement that would modify the database, SQL Server automatically
     writes the changes to the log. After all changes for a statement
     have been recorded in the log, they are written to an in-cache
     copy of the data page. The data page remains in cache until the
     memory is needed for another database page. At that time, it is
     written to disk. If any statement in a transaction fails to
     complete, SQL Server reverses all changes made by the transaction.
     SQL Server writes an "end transaction" record to the log at the
     end of each transaction, recording the status (success or failure)
     of the transaction

As such, the log will grow as user connections affect changes to the
database. The need arises to then clear out the log of all transactions that
have been flushed to disk. This is performed by issuing the following
command:

     dump transaction my_stinking_db to logdump_for_my_db
     go

The SQL Server will write to the dumpdevice all transactions that have been
committed to disk and will delete the entries from its copy, thus freeing up
space in the log. Dumping of the transaction logs is accomplished via cron.
We schedule the heavily hit databases every 20 minutes during peak times.

     A single user can fill up the log by having begin transaction with
     no corresponding commit/rollback transaction. This is because all
     their changes are being applied to the log as an open-ended
     transaction, which is never closed. This open-ended transaction
     cannot be flushed from the log, and therefore grows until it
     occupies all of the free space on the log device.

And the way we dump it is with a dump device. :-)

An Example

If the DBA has four databases to plop on this SQL Server and they need a
total of 800MB of data and 100MB of log (because that's what really matters
to us), then they'd probably do something like this:

  1. allocate sufficient raw devices to cover the data portion of all the
     databases
  2. allocate sufficient raw devices to cover the log portion of all the
     databases
  3. start allocating the databases to the devices.

For example, assuming the following database requirements:

                                  Database
                                Requirements

                                DB Data  Log

                                a  300   30

                                b  400   40

                                c  100   10

and the following devices:
                                   Devices

                      Logical          Physical      Size

                   dks3d1s2_data  /dev/rdsk/dks3d1s2 500

                   dks4d1s2_data  /dev/rdsk/dks4d1s2 500

                   dks5d1s0_log   /dev/rdsk/dks5d1s0 200

then the DBA may elect to create the databases as follows:

     create database a on dks3d1s2_data = 300 log on dks5d1s0_log = 30
     create database b on dks4d1s2_data = 400 log on dks5d1s0_log = 40
     create database c on dks3d1s2_data = 50, dks4d1s2_data = 50 log on
     dks5d1s0_log = 10

Some of the devices will have extra space available because our database
allocations didn't use up all the space. That's fine because it can be used
for future growth. While the Sybase SQL Server is running, no other Sybase
SQL Server can re-allocate these physical devices.
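
For completeness, here is a sketch of how one of the data devices above
might have been initialized (the vdevno is hypothetical and size is given
in 2K pages, so 500MB = 256000 pages):

1> disk init
2>    name = "dks3d1s2_data",
3>    physname = "/dev/rdsk/dks3d1s2",
4>    vdevno = 3,
5>    size = 256000
6> go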

TempDB

TempDB is simply a scratch pad database. It gets recreated when a SQL Server
is rebooted. The information held in this database is temporary data. A
query may build a temporary table to assist it; the Sybase optimizer may
decide to create a temporary table to assist itself.

Since this is an area of constant activity we create this database over a
cooked file system which has historically proven to have better performance
than raw - due to the buffered writes provided by the Operating System.

Port Numbers

When creating a new SQL Server, we allocate a port to it (currently, DBA
reserves ports 1500 through 1899 for its use). We then map a host name to
the different ports: hera, fddi-hera and so forth. We can actually have more
than one port number for a SQL Server but we typically don't do this.
----------------------------------------------------------------------------

                        Q1.12: HOW DO I BCP NULL DATES?
                                       
   
     _________________________________________________________________
   
   As long as there is _nothing_ between the field delimiters in your
   data, a null will be entered. If there's a space, the value will be
   Jan 1, 1900.
   
   You can use _sed(1)_ to squeeze blanks out of fields:
   

 sed -e 's/|[ ]*|/||/g' old_file > new_file

   
     _________________________________________________________________

                Q1.13: WHAT'S THE BEST VALUE FOR _CSCHEDSPINS_?
                                       
   
     _________________________________________________________________
   
   It is crucial to understand that _cschedspins_ is a tunable parameter
   (recommended values being between 1-2000) and the optimum value is
   completely dependent on the customer's environment. _cschedspins_ is
   used by the scheduler only when it finds that there are no runnable
   tasks. If there are no runnable tasks, the scheduler has two options:
    1. Let the engine go to sleep (which is done by an OS call) for a
       specified interval or until an event happens. This option assumes
       that tasks won't become runnable because of tasks executing on
       other engines. This would happen when the tasks are waiting for
       I/O more than any other resource such as locks. Which means that
       we could free up the CPU resource (by going to sleep) and let the
       system use it to expedite completion of system tasks including
       I/O.
    2. Go and look for a ready task again. This option assumes that a
       task would become runnable in the near term and so incurring the
       extra cost of an OS context switch through the OS sleep/wakeup
       mechanism is unacceptable. This scenario assumes that tasks are
       waiting on resources such as locks, which could free up because of
       tasks executing on other engines, more than they wait for I/O.
       
   _cschedspins_ controls how many times we would choose option 2 before
   choosing option 1. Setting _cschedspins_ low favors option 1 and
   setting it high favors option 2. Since an I/O intensive task mix fits
   in with option 1, setting _cschedspins_ low may be more beneficial.
   Similarly since a CPU intensive job mix favors option 2, setting
   _cschedspins_ high may be beneficial.
   
   The consensus is that a single cpu server should have _cschedspins_
   set to 1. However, I strongly recommend that users carefully test
   values for _cschedspins_ and monitor the results closely. I have seen
   more than one site that has shot themselves in the foot so to speak
   due to changing this parameter in production without a good
   understanding of their environment.
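
   For pre-System 11 servers, _cschedspins_ is typically changed with
   _buildmaster_ against the master device while the SQL Server is shut
   down. A hedged sketch (the device path is illustrative; check the
   buildmaster usage for your platform):

 % buildmaster -d/usr/sybase/dbf/BLAND/master.dat -ycschedspins=1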
     _________________________________________________________________

                Q1.14: WHEN SHOULD I EXECUTE AN _SP_RECOMPILE?_
                                       
   
     _________________________________________________________________
   
   An _sp_recompile_ should be issued any time a new index is added or
   _update statistics_ is run. Dropping an index will cause an automatic
   recompile of all objects that are dependent on the table.
   
   The _sp_recompile_ command simply increments the _schemacnt_ counter
   for the given table. All dependent objects' counters are checked
   against this counter and if they are different the SQL Server
   recompiles the object.
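
   Usage is simply (the table name is illustrative):

 1> sp_recompile my_table
 2> go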
     _________________________________________________________________

                 Q1.15: WHAT ARE THE DIFFERENT TYPES OF LOCKS?
                                       
   
     _________________________________________________________________
   
   First off, just to get it out of the way, there is no method to perform
   row level locking. If you think you need row level locking, you
   probably aren't thinking set based -- see Q1.1 for set processing.
   
   The SQL Server uses locking in order to ensure the sanity of your
   queries. Without locking there is no way to ensure the integrity of
   your operation. Imagine a transaction that debited one account and
   credited another. If the transaction didn't lock out readers/writers
   then someone can potentially see erroneous data.
   
   Essentially, the SQL Server attempts to use the least intrusive lock
   possible, page lock, to satisfy a request. If it reaches around 200
   page locks, then it escalates the lock to a table lock and releases
   all page locks thus performing the task more efficiently.
   
   There are three types of locks:
     * page locks
     * table locks
     * demand locks
       
Page Locks

   There are three types of page locks:
     * shared
     * exclusive
     * update
       
  shared
  
   These locks are requested and used by readers of information. More
   than one connection can hold a shared lock on a data page.
   
   This allows for multiple readers.
   
  exclusive
  
   The SQL Server uses exclusive locks when data is to be modified. Only
   _one_ connection may have an exclusive lock on a given data page. If a
   table is large enough and the data is spread sufficiently, more than
   one connection may update different data pages of a given table
   simultaneously.
   
  update
  
   An update lock is placed during a _delete_ or an _update_ while the SQL
   Server is hunting for the pages to be altered. While an update lock is
   in place, there can be shared locks thus allowing for higher
   throughput.
   
   The update lock(s) are promoted to exclusive locks once the SQL Server
   is ready to perform the _delete/update_.
   
Table Locks

   There are three types of table locks:
     * intent
     * shared
     * exclusive
       
  intent
  
   Intent locks indicate the intention to acquire a shared or exclusive
   lock on a data page. Intent locks are used to prevent other
   transactions from acquiring shared or exclusive locks on the given
   page.
   
  shared
  
   This is similar to a page level shared lock but it affects the entire
   table. This lock is typically applied during the creation of a
   non-clustered index.
   
  exclusive
  
   This is similar to a page level exclusive lock but it affects the
   entire table. If an _update_ or _delete_ affects the entire table, an
   exclusive table lock is generated. Also, during the creation of a
   clustered index an exclusive lock is generated.
   
Demand Locks

   A demand lock prevents further shared locks from being set. The SQL
   Server sets a demand lock to indicate that a transaction is next to
   lock a table or a page.
   
   This avoids indefinite postponement if there was a flurry of readers
   when a writer wished to make a change.
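
   To watch this in action, _sp_lock_ can be run from a second connection
   while a transaction is open (the _accounts_ table below is made up).
   From one connection, start a transaction and leave it open:

1> begin transaction
2> update accounts set balance = balance - 100 where acct_id = 42
3> go

   From a second connection, list the locks held server-wide:

1> sp_lock
2> go

   You should see the first connection's spid holding an exclusive page
   lock (or, after escalation, a table lock) until it commits or rolls
   back.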
     _________________________________________________________________

                Q1.16: WHAT'S THE PURPOSE OF USING _HOLDLOCK_?
                                       
   
     _________________________________________________________________
   
   All _select/readtext_ statements acquire shared locks (see Q1.15) to
   retrieve their information. After the information is retrieved, the
   shared locks are released.
   
   The _holdlock_ option is used within _transactions_ so that after the
   _select/readtext_ statement the locks are held until the end of the
   transaction:
     * commit transaction
     * rollback transaction
       
   If the _holdlock_ is not used within a transaction, the shared locks
   are released.
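
   A minimal sketch of the difference (table and column names are made
   up): without _holdlock_, another connection could change the row
   between the _select_ and the _update_; with it, the shared lock is
   kept until the commit:

begin transaction acct_check
   select balance from accounts holdlock
   where  acct_id = 42
   /* the shared lock is now held until commit/rollback */
   update accounts set balance = balance - 100
   where  acct_id = 42
commit transaction acct_check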
     _________________________________________________________________

          Q1.18: CAN I USE A NAMED PIPE TO _BCP/DUMP_ DATA OUT OR IN?
                                       
   _System 10 and above._
     _________________________________________________________________
   
   If you would like to _bcp_ a table out to a named pipe and compress
   the result on the fly:
     1. %mknod bcp.pipe p
     2. %compress < bcp.pipe > sysobjects.Z &
     3. %bcp master..sysobjects out bcp.pipe -c -U ..
     4. Use _ps(1)_ to determine when the _compress_ finishes.
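
   To load the compressed copy back in later, reverse the pipeline (the
   target table name below is purely illustrative):
     1. %mknod bcp.pipe p
     2. %zcat sysobjects.Z > bcp.pipe &
     3. %bcp my1db..sysobjects_copy in bcp.pipe -c -U ..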
       
   To bcp _from_ my1db..dummy_table_1 _to_ my2db..dummy_table_2:
    1. %mknod bcp.pipe p
    2. %bcp my2db..dummy_table_2 in bcp.pipe -c -U .. &
       
     To avoid confusion between the above _bcp_ and the next, you may
     choose to either use a separate window or redirect the output to a
     file.
    3. %bcp my1db..dummy_table_1 out bcp.pipe -c -U ..
       
   
     _________________________________________________________________

                Q1.19: WHAT'S A GOOD EXAMPLE OF A TRANSACTION?
                                       
   
     _________________________________________________________________
   
     This answer is geared for Online Transaction Processing (OLTP)
     applications.
     
   To gain maximum throughput all your transactions should be in stored
   procedures - see Q3.8. The transactions within each stored procedure
   should be short and simple. All validation should be done outside of
   the transaction and only the modification to the database should be
   done within the transaction. Also, don't forget to name the
   transaction for _sp_whodo_ - see Q4.2.
   
   The following is an example of a _good_ transaction:
   

/* perform validation */
select ...
if ... /* error */
   /* give error message */
else   /* proceed */
   begin
      begin transaction acct_addition
      update ...
      insert ...
      commit transaction acct_addition
   end

   
   
   The following is an example of a _bad_ transaction:
   

begin transaction poor_us
update X ....

select ...
if ... /* error */
   /* give error message */
else   /* proceed */
   begin
      update ...
      insert ...
   end
commit transaction poor_us

   This is bad because:
     * the first update on table X is held throughout the transaction.
       The idea with OLTP is to get in and out _fast_.
     * If an error message is presented to the end user and we await
       their response, we'll maintain the lock on table X until the user
       presses return. If the user is out in the can, we could wait for
       hours.
       
   
     _________________________________________________________________

                 Q1.20: BUILDMASTER CONFIGURATION DEFINITIONS
                                       
   
     _________________________________________________________________
   
      _Attention!_ Please notice: be very careful with these parameters.
      Use them only at your own risk. Be sure to keep a copy of the
      original parameters, and be sure to have a dump of all databases
      (including master) handy.
     
   
     _________________________________________________________________
   
   
   
   The following is a list of configuration parameters and their effect
   on the SQL Server. Changes to these parameters can affect performance
   of the server. Sybase does not recommend modifying these parameters
   without first discussing the change with Sybase Tech Support. This
   list is provided for information only.
   
   These are categorized into two kinds:
     * Configurable through sp_configure and
     * not configurable but can be changed through 'buildmaster
       -y<variable>=value -d<dbdevice>'
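
   For example (the option, device path and values below are purely
   illustrative), the two styles look like this:

1> sp_configure "recovery interval", 7
2> go

% buildmaster -d/sybase/master.dat -yctimemax=1500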
       
   
   
  Configurable variables:
  
   crecinterval:
   
   The recovery interval specified in minutes.
   
   ccatalogupdates:
   
   A flag indicating whether system catalogs can be updated or not.
   
   cusrconnections:

                This is the number of user connections allowed in SQL
                Server.  This value + 3 (one each for the checkpoint,
                network and mirror handlers) makes up the number of PSS
                configured in the server.

   
     _________________________________________________________________
   
   cfgpss:

                Number of PSS configured in the server. This value will
                always be 3 more than cusrconnections. The reason is we
                need PSS for checkpoint, network and mirror handlers.

                THIS IS NOT CONFIGURABLE.

   
     _________________________________________________________________
   
   cmemsize:

                The total memory configured for the Server in 2k
                units.  This is the memory the server will use for both
                Server and Kernel Structures.  For Stratus or any 4k
                pagesize implementation of SQL Server, certain values
                will change as appropriate.

   
   
   cdbnum:

                This is the number of databases that can be open in SQL
                Server at any given time.

   
   
   clocknum:

                Variable that defines and controls the number of logical
                locks configured in the system.

   
   
   cdesnum:

                This is the number of objects that can be open at any
                given point in time.

   
   
   cpcacheprcnt:

                This is the percentage of the cache that is reserved
                for caching procedures.

   
   
   cfillfactor:
   
   Fill factor for indexes.
   
   ctimeslice:

                This value is in units of milliseconds. It determines how
                much time a task is allowed to run before it yields, and
                is internally converted to ticks. See below for the
                explanations of cclkrate, ctimemax, etc.

   
   
   ccrdatabasesize:

                The default size of a database when it is created.
                This value is in megabytes; the default is 2 MB.

   
   
   ctappreten:
   
   An outdated, unused variable.
   
   crecoveryflags:

                A toggle flag which will display certain recovery information
                during database recoveries.

   
   
   cserialno:

                An informational variable that stores the serial number
                of the product.

   
   
   cnestedtriggers:
   
   Flag that controls whether nested triggers are allowed or not.
   
   cnvdisks:

                Variable that controls the number of device structures
                allocated, which in turn limits the number of devices
                that can be opened during server boot. If the user
                defined 20 devices but this value is configured to be
                10, only 10 devices will be opened during recovery and
                the rest will get errors.
   
   cfgsitebuf:

                This variable controls the maximum number of site
                handler structures that will be allocated. This in turn
                controls the number of site handlers that can be active
                at any given instant.
   
   cfgrembufs:

                This variable controls the number of remote buffers used
                to send to and receive from remote sites. This value
                should be set to the number of logical connections
                configured (see below).
   
   cfglogconn:

                This is the number of logical connections that can be
                open at any instant. This value controls the number of
                resource structures allocated and hence affects the
                total number of logical connections combined across all
                sites. THIS IS NOT PER SITE.

   
   
   cfgdatabuf:

                Maximum number of pre-read packets per logical
                connection. If the number of logical connections is set
                to 10 and cfgdatabuf is set to 3, then 30 resources
                will be allocated.

   
   
   cfupgradeversion:
   
   Version number of the last upgrade program run on this server.
   
   csortord:
   
   Sort order of the SQL Server.
   
   cold_sortord:

                When sort orders are changed the old sort order is
                saved in this variable to be used during recovery
                of the database after the Server is rebooted with
                the sort order change.

   
   
   ccharset:
   
   Character set used by the SQL Server.
   
   cold_charset:

                Same as cold_sortord except it stores the previous
                Character Set.

   
     _________________________________________________________________
   
   cdflt_sortord:

                page # of sort order image definition. This should
                not be changed at any point. This is a server only
                variable.

   
   
   cdflt_charset:

                page # of character set image definition. This should
                not be changed at any point. This is a server only
                variable.

   
   
   cold_dflt_sortord:

                page # of previous sort order image definition. This
                should not be changed at any point. This is a server
                only variable.

   
   
   cold_dflt_charset:

                page # of previous character set image definition. This
                should not be changed at any point.  This is a server
                only variable.

   
     _________________________________________________________________
   
   
   
   cdeflang:
   
   Default language used by SQL Server.
   
   cmaxonline:

        Maximum number of engines that can be brought online. This
        number should not be more than the number of CPUs available on
        the system. On a single-CPU system such as the RS6000 this
        value is always 1.

   
   
   cminonline:

        Minimum number of engines that should be online. This is 1 by
        default.

   
   
   cengadjinterval:
   
   A noop variable at this time.
   
   cfgstacksz:

        Stack size per task configured. This doesn't include the guard
        area of the stack space. The guard area can be altered through
        cguardsz.

   
     _________________________________________________________________
   
   cguardsz:

        This is the size of the guard area. The SQL Server will
        allocate stack space for each task by adding cfgstacksz
        (configurable through sp_configure) and cguardsz (default is
        2K).  This has to be a multiple of PAGESIZE which will be 2k
        or 4k depending on the implementation.

   
   
   cstacksz:

        Size of fixed stack space allocated per task including the
        guard area.

   
     _________________________________________________________________
   
   
   
   Non-configurable values :
     _________________________________________________________________
   
   _TIMESLICE, CTIMEMAX ETC:_
     _________________________________________________________________
   
   
   
   1 millisecond = 1/1000th of a second.
   1 microsecond = 1/1,000,000th of a second.
   "Tick": the interval in real time between two clock interrupts.
   
  "cclkrate" :
  
   

        A value specified in microsecond units.
        Normally, on systems where a fine-grained timer is not available
        or the Operating System cannot set sub-second alarms, this value
        is set to 1000000 microseconds, which is 1 second. In other
        words an alarm will go off every second, i.e. you get 1 tick
        per second.

        On Sun4 this is set to 100000 microseconds, which results in an
        interrupt every 1/10th of a second, i.e. 10 ticks per second.

   
   
  "avetimeslice" :
  
   

        A value specified in millisecond units.
        This is the <timeslice value> given in "sp_configure". The
        milliseconds are converted to microseconds and finally to tick
        values:

                ticks = <avetimeslice> * 1000 / cclkrate

   
   
   "timeslice" :
   

        The unit of this variable is in ticks.
        This value is derived from "avetimeslice". If "avetimeslice"
        is less than 1000 milliseconds then timeslice is set to 1 tick.

   
   
  "ctimemax" :
  
   
   
   The unit of this variable is in ticks.

        A task is considered to be in an infinite loop if the ticks
        consumed by that task exceed the ctimemax value. This is when
        you get timeslice -201 or -1501 errors.

   
   
  "cschedspins" :
  
   For more information see Q1.13.

        This value alters the behavior of the SQL Server scheduler.
        The scheduler will either run a qualified task or look
        for I/O completion or sleep for a while before it can
        do anything useful.

        The cschedspins value determines how often the scheduler
        will sleep, not how long it will sleep. A low value suits an
        I/O-bound SQL Server while a high value suits a CPU-bound SQL
        Server. Since most SQL Servers run a mixed workload, this value
        needs to be fine-tuned.

        Based on practical behavior in the field, a single-engine
        SQL Server should have cschedspins set to 1 and a multi-engine
        server should have it set to 2000.

   
   
   Now that we've defined the units of these variables, what happens when
   we change cclkrate?
   
   Assume we have a cclkrate=100000.
   
   A clock interrupt will occur every 100000/1000000 = 1/10th of a
   second. A task that starts with 1 tick and is allowed to run up to
   "ctimemax=1500" ticks can potentially hold the CPU for 0.1 s *
   (1500 + 1) ticks, which is roughly 150 seconds.
   
   Now change the cclkrate to 75000.
   
   A clock interrupt will occur every 75000/1000000 = 0.075 seconds. The
   same task running up to ctimemax=1500 ticks can potentially hold the
   CPU for 0.075 s * (1500 + 1) ticks, roughly 112 seconds.
   
   Decreasing the cclkrate value therefore decreases the time allowed to
   each task. If a task cannot voluntarily yield within that time, the
   scheduler will kill the task.
   
   UNDER NO CIRCUMSTANCES should the cclkrate value be changed. The
   default ctimemax value should be set to 1500. This is an empirical
   value and can be changed only under special circumstances and
   strictly under the guidance of DSE.
     _________________________________________________________________
   
   
   
   cfgdbname:

                Name of the master device is saved here. This is 64
                bytes in length.

   
   
   cfgpss:

                This is a derived value from cusrconnections + 3.
                See cusrconnections above.

   
   
   cfgxdes:

                This value defines the number of transactions that can
                be in progress for a task at any given instant.
                Changing this value to more than 32 will have no effect
                on the server.
   
   cfgsdes:

                This value defines the number of open tables per task,
                typically for a single query: the number of tables
                referenced in the query, including subqueries.

                Sybase advises not to change this value. Doing so
                significantly changes the size of the per-user
                resources in the SQL Server.

   
   
   cfgbuf:

                This is a derived variable based on the total
                memory configured and subtracting different resource
                sizes for Databases, Objects, Locks and other
                Kernel memories.

   
   
   cfgdes:
   
   This is the same as cdesnum. Setting it to any other value will have
   no effect.
   
   cfgprocedure:
   
   This is a derived value based on the cpcacheprcnt variable.
   
   cfglocks:
   
   This is the same as clocknum. Setting it to any other value will have
   no effect.
   
   cfgcprot:

        This variable defines the number of cache protectors per task.
        It is used internally by the SQL Server.

        Sybase advises not to modify this value, as the default of 15 is
        more than sufficient.

   
   
   cnproc:

        This is a derived value based on cusrconnections + <extra> for
        Sybase internal tasks that are both visible and non-visible.

   
   
   cnmemmap:

        This is an internal variable that will keep track of SQL Server
        memory.

        Modifying this value will not have any effect.

   
   
   cnmbox:

        Number of mailbox structures that need to be allocated. Used
        more in VMS environments than in UNIX.

   
   
   cnmsg:
   
   Used in tandem with cnmbox.
   
   cnmsgmax:
   
   Maximum number of messages that can be passed between mailboxes.
   
   cnblkio:

        Number of disk I/O requests (async and direct) that can be
        processed at a given instant. This is a global value for all
        engines, not a per-engine value.

        This value is directly dependent on the number of I/O requests
        that can be processed by the Operating System, and so varies
        from one Operating System to another.

   
   
   cnblkmax:

        Maximum number of I/O requests that can be processed at any
        given time.

        Normally cnblkio, cnblkmax and cnmaxaio_server should be the
        same.

   
   
   cnmaxaio_engine:

        Maximum number of I/O requests that can be processed by one
        engine. Since engines are Operating System processes, this value
        should be set if the Operating System imposes a per-process
        limit. Otherwise it is a noop.

   
   
   cnmaxaio_server:

        This is the total number of I/O requests the SQL Server can do.
        This value is directly dependent on the number of I/O requests
        that can be processed by the Operating System, and so varies
        from one Operating System to another.

   
   
   csiocnt:
   
   not used.
   
   cnbytio:

        Similar to the disk I/O requests, this is for network I/O
        requests, including disk/tape dumps. This value is for the
        whole SQL Server, including all engines.

   
   
   cnbytmax:
   
   Maximum number of network I/O requests, including disk/tape dumps.
   
   cnalarm:

        Maximum number of alarms including the alarms used by
        the system. This is typically used when users do "waitfor delay"
        commands.

   
   
   cfgmastmirror:
   
   Mirror device name for the master device.
   
   cfgmastmirror_stat:

        Status of the mirror devices for the master device, such as
        serial/dynamic mirroring, etc.

   
   
   cindextrips:

        This value determines the aging of an index buffer before it
        is removed from the cache.

   
   
   coamtrips:

        This value determines the aging of an OAM buffer before it
        is removed from the cache.

   
   
   cpreallocext:

        This value determines the number of extents that will be
        allocated while doing BCP.

   
   
   cbufwashsize:

        This value determines when to flush buffers in the cache
        that are modified.

                         Q1.21: DBCC COMMAND REFERENCE
                                       
   
     _________________________________________________________________
   
   If you know of any more DBCC commands, please mail them to
   pablo@sgi.com. For your consumption, here they are; use at your own
   risk:
   
     * allocdump( dbid, page )
     * bhash( { print_bufs | no_print }, bucket_limit )
     * buffer( [ dbid ][, objid ][, nbufs ], printopt = { 0 | 1 | 2 },
       buftype )
     * bytes( startaddress, length )
     * checkalloc[ ( dbname [, fix | nofix ] ) ]
     * checkdb[( dbname [, skip_ncindex ] ) ]
     * checktable( tablename | tabid [, skip_ncindex ] )
     * dbinfo( [ dbname ] )
     * dbrepair( dbid, option = { dropdb | fixindex | fixsysindex },
       table, indexid )
     * dbtable( dbid )
     * delete_row( dbid, pageid, delete_by_row = { 1 | 0 }, rownum )
     * des( [ dbid ][, objid ] )
     * extentcheck( dbid, objid, indexid, sort = {1|0} )
     * extentdump( dbid, page )
     * extentzap( dbid, objid, indexid, sort )
     * findnotfullextents( dbid, objid, indexid, sort = { 1 | 0 } )
     * fix_al( [ dbname ] )
     * help( dbcc_command )
     * ind( dbid, objid, printopt = { 0 | 1 | 2 } )
     * indexalloc(tablename|tabid, indid, [full | optimized | fast],[fix
       | nofix])
     * locateindexpgs( dbid, objid, page, indexid, level )
     * lock
     * log(
       [dbid][,objid][,page][,row][,nrecords][,type={-1..36}],printopt={0
       |1} )
     * memusage
     * netmemshow( option = {1 | 2 | 3} )
     * netmemusage
     * newalloc( dbname, option = { 1 | 2 | 3 } )
     * page( dbid, pagenum [, printopt={0|1|2} ][, cache={0|1} ][,
       logical={1|0} ] )
     * pglinkage( dbid, start, number, printopt={0|1|2}, target,
       order={1|0} )
     * pktmemshow( option = {spid} )
     * procbuf( dbid, objid, nbufs, printopt = { 0 | 1 } )
     * prtipage( dbid, objid, indexid, indexpage )
     * pss( suid, spid, printopt = { 1 | 0 } )
     * rebuildextents( dbid, objid, indexid )
     * resource
     * show_bucket( dbid, pageid, lookup_type )
     * tab( dbid, objid, printopt = { 0 | 1 | 2 } )
     * tablealloc(tablename|tabid, [full | optimized | fast],[fix |
       nofix])
     * traceoff( tracenum [, tracenum ... ] )
     * traceon( tracenum [, tracenum ... ] )
     * undo( dbid, pageno, rowno )
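
   Most of these write their output to the errorlog unless you first
   redirect it to your session with traceflag 3604. A minimal
   illustration (the dbid and page number below are made up):

1> dbcc traceon(3604)
2> go
1> dbcc page(5, 1024, 1)
2> go
1> dbcc traceoff(3604)
2> go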
       
   
     _________________________________________________________________

                          Q1.22: HOW TO SET _TS ROLE_
                                       
   
     _________________________________________________________________
   
   Some _DBCC_ commands require that you set _TS Role_ in order to run
   them. Here's how to set it:
   
     Log in to the Server as _sa_ and perform the following:

sp_role "grant", sybase_ts_role, sa
go
set role "sybase_ts_role" on
go

   
     _________________________________________________________________

                      Q1.23: HOW TO CLEAR A _LOG SUSPEND_
                                       
   
     _________________________________________________________________
   
   In System 10, when you create a database with its log on its own
   device, SQL Server tells you the threshold of your log as a number of
   log pages; as you alter the log segment the threshold becomes larger.
   
   You're expected to write this info down so you can create a threshold
   manager: a stored procedure that dumps the transaction log whenever
   SQL Server hits that threshold. This way you never get to see error
   1105. Otherwise, SQL Server will suspend all activity when the
   threshold is hit. Nothing you do, including _dump tran with no_log_,
   will clear the log; the only way to clear it is to unsuspend the
   process with the following command:
   

1> select lct_admin ("unsuspend", db_id)
2> go

     immediately followed by:

    dump tran _db_name_ with truncate_only

     or, if the above command doesn't work, issue:

    dump tran _db_name_ with no_log

   If you wish to retain the pre-System 10 behavior, where SQL Server
   aborts the process when the threshold is hit, then you need to set
   _abort tran on log full_ through _sp_dboption_ on each database. That
   way, when the threshold is hit your process will be aborted, the
   transaction will be rolled back and you won't see your process get
   into _log suspend_ mode.
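
   A minimal sketch of such a threshold procedure (this assumes the
   standard arguments SQL Server passes to the last-chance threshold
   procedure; the dump device name is made up -- adapt it to your site):

create procedure sp_thresholdaction
       @dbname       varchar(30),
       @segment_name varchar(30),
       @space_left   int,
       @status       int
as
    /* dump the log so that suspended processes can resume */
    dump transaction @dbname to logdump_device

    print "LOG DUMP: '%1!' on segment '%2!'", @dbname, @segment_name
go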
     _________________________________________________________________

                      Q1.24: HOW TO MANUALLY DROP A TABLE
                                       
   
     _________________________________________________________________
   
   Occasionally you may find that the SQL Server crashed after you issued
   a _drop table_ command and, consequently, the table wasn't dropped
   entirely. Sure, you can't see it, but that sucker is still floating
   around somewhere.
   
   Here's a list of instructions to follow when trying to drop a corrupt
   table:
    1.

    sp_configure allow, 1
    go
    reconfigure with override
    go

    2. Write _db_id_ down.

    use _db_name_
    go
    select db_id()
    go

    3. Write down the _id_ of the _bad_table_:

    use master
    go
    select id from sysobjects where name = _bad_table_name_
    go

    4. You will need these index IDs to run _dbcc extentzap_. Also,
       remember that if the table has a clustered index you will need to
       run _extentzap_ on index "0", even though there is no sysindexes
       entry for that indid.

    select indid from sysindexes where id = _table_id_
    go

    5. This is not required but a good idea:

    begin transaction
    go

    6. Type in this short script, this gets rid of all system catalog
       information for the object, including any object and procedure
       dependencies that may be present.
       
       Some of the entries are unnecessary but better safe than sorry.

     declare @obj int
     select @obj = id from sysobjects where name = _bad_table_name_
     delete syscolumns where id = @obj
     delete sysindexes where id = @obj
     delete sysobjects where id = @obj
     delete sysprocedures where id in
            (select id from sysdepends where depid = @obj)
     delete sysdepends where depid = @obj
     delete syskeys where id = @obj
     delete syskeys where depid = @obj
     delete sysprotects where id = @obj
     delete sysconstraints where tableid = @obj
     delete sysreferences where tableid = @obj
     go

    7. Just do it!

    commit transaction
    go

    8. Gather information to run _dbcc extentzap_:

    sp_dboption _db_name_, read, true
    go
    use _db_name_
    go
    checkpoint
    go

    9. Run _dbcc extentzap_ once for _each_ index (including index 0, the
       data level) that you got from above:

    use master
    go
    dbcc traceon (3604)
    go
    dbcc extentzap (_db_id_, _obj_id_, _indx_id_, 0)
    go
    dbcc extentzap (_db_id_, _obj_id_, _indx_id_, 1)
    go


     Notice that extentzap runs _twice_ for each index. This is because
     the last parameter (the _sort_ bit) might be 0 or 1 for each index,
     and you want to be absolutely sure you clean them all out.
   10. Clean up after yourself.

    sp_dboption _db_name_, read, false
    go
    use _db_name_
    go
    checkpoint
    go
    sp_configure allow, 0
    go
    reconfigure with override
    go


   
     _________________________________________________________________

                   Q1.25: HOW DO I CORRECT _TIMESLICE -201_
                                       
   
     _________________________________________________________________
   
  Why Increase It?
  
   Basically, it allows a task to stay scheduled on the CPU for a longer
   time. Each task on the system is scheduled onto the CPU for a fixed
   period of time, called the timeslice, during which it does some work;
   the work is resumed when the task's next turn comes around.
   
   The process has up until the value of _ctimemax_ (a config block
   variable) to finish its task. As the task is working away, the
   scheduler counts down ctimemax units. When it gets to the value of
   _ctimemax_ - 1, if it gets _stuck_ and for some reason cannot be taken
   off the CPU, then a timeslice error gets generated and the process
   gets infected.
   
   On the other hand, SQL Server will allow a Server process to run as
   long as it needs to. It will not swap the process out for another
   process to run. The process will decide when it is "done" with the
   Server CPU. If, however, a process goes on and on and never
   relinquishes the Server CPU, then Server will timeslice the process.
   
  Potential Fix
    1. Shutdown the SQL Server
    2. %buildmaster -d_your_device_ -yctimemax=2000
    3. Restart your SQL Server. If the problem persists contact Sybase
       Technical Support notifying them what you have done already.
       
   
     _________________________________________________________________

           Q1.26: HOW DO I PIPE THE OUTPUT OF ONE _ISQL_ TO ANOTHER?
                                       
   
     _________________________________________________________________
   
   The following example queries _sysdatabases_, builds a string of the
   form _sp_helpdb dbname_ for each database name, and sends the result
   to another _isql_. This is accomplished using the Bourne shell _sh(1)_
   and _sed(1)_ to strip unwanted output (see Q1.9):
   

#!/bin/sh

PASSWD=yuk
DSQUERY=GNARLY_HAIRBALL

echo "$PASSWD
print \"$PASSWD\"
go
select 'sp_helpdb ' + name + char(10) + 'go' from sysdatabases
go" | isql -U sa -S $DSQUERY -w 1000 | \
   sed -e '/affected/d' -e '/---/d' -e '/Password:/d' | \
   isql -U sa -S $DSQUERY -w 1000

   To help you understand this, you may wish to comment out parts of the
   pipeline and see what output is generated at each stage.
     _________________________________________________________________

           Q1.27: HOW DO I TURN OFF _MARKED SUSPECT_ ON MY DATABASE?
                                       
   
     _________________________________________________________________
   
   Say one of your databases is marked suspect as the SQL Server is
   coming up. Here are the steps to take to unset the flag.
   
     _Remember to fix the problem that caused the database to be marked
     suspect after switching the flag. _
     
Pre System 10

    1. sp_configure "allow", 1
    2. reconfigure with override
    3. select status - 320 from sysdatabases where dbid =
       db_id("my_hosed_db") - save this value.
    4. begin transaction
    5. update sysdatabases set status = _-32767_ where dbid =
       db_id("my_hosed_db")
    6. commit transaction
     7. You should now be able to access the database so that it can be
        cleaned out. If not:
         1. shutdown
         2. startserver -f RUN_*
    8. _fix the problem that caused the database to be marked suspect_
    9. begin transaction
   10. update sysdatabases set status = _saved_value_ where dbid =
       db_id("my_hosed_db")
   11. commit transaction
   12. sp_configure "allow", 0
   13. reconfigure
       
System 10

    1. sp_configure "allow", 1
    2. reconfigure with override
    3. select status - 320 from sysdatabases where dbid =
       db_id("my_hosed_db") - save this value.
    4. begin transaction
    5. update sysdatabases set status = _-32768_ where dbid =
       db_id("my_hosed_db")
    6. commit transaction
    7. shutdown
    8. startserver -f RUN_*
    9. _fix the problem that caused the database to be marked suspect_
   10. begin transaction
   11. update sysdatabases set status = _saved_value_ where dbid =
       db_id("my_hosed_db")
   12. commit transaction
   13. sp_configure "allow", 0
   14. reconfigure
   15. shutdown
   16. startserver -f RUN_*
       
   
     _________________________________________________________________

                Q1.28: CERTIFIED SYBASE PROFESSIONAL - _CSPDBA_
                                       
   
     _________________________________________________________________
   
   Here's a list of commonly asked questions about becoming a _CSPDBA_:
   
  What are the exams like?
  
   The exams are administered by Drake Testing and Technologies and are
   given at Drake authorized testing centers. The Environment and
   Operations exams each take an hour, and the Fundamentals exam takes an
   hour and a half. Each exam contains between 60 and 90 questions. Many
   of the questions are _multiple choice_, some are _select all that
   apply_ and some are _fill in the blank_. Depending on the exam, a
   score of 67% - 72% is required to pass. The exams are challenging, but
   fair.
   
   Before taking an exam, Drake provides you with a short _tutorial exam_
   that you can take to get an idea of the format of the exam questions.
   
   You receive a report each time you complete an exam. The report shows
   the passing score, your total score, and your score in various
   sections of the exam. (You aren't told which specific questions you
   answered correctly or incorrectly.)
   
  How do I register for the exams?
  
   Call 1-800-8SYBASE, select option 2, then option 2 again. You will be
   connected to a Drake representative. Currently each exam costs $150.
   
  What happens once I pass?
  
   You will receive a certificate in the mail about a month after you've
   passed all the exams. When you receive your certificate, you'll also
   have the opportunity to enter into a licensing agreement that will
   allow you to use the Certified Sybase Professional service mark (logo)
   in your office and on your business cards. If your company is an Open
   Solutions partner, your certification is acknowledged by the
   appearance of the CSP logo with your company's name in the Open
   Solutions Directory. If you have a CompuServe account, you can obtain
   access to a private section of _Sybase OpenLine_, a technical forum on
   CompuServe.
   
  What topics are covered?
     * Sybase SQL Server Fundamentals Exam Topics:
          + Sybase client/server architecture
          + SQL Server objects
          + Use of tables
          + Use of indexes
          + Use of columns
          + Use of defaults
          + Use of triggers
          + Use of keys
          + Use of check constraints
          + Use of datatypes
          + Use of cursors
          + System datatypes
          + Views
          + Data integrity
          + Rules
          + Select statements
          + Transaction management
          + Locking
          + Stored procedures
          + Local and global variables
     * Sybase SQL Server Environment Exam Topics:
          + Configuration and control
          + Starting the SQL Server
          + Accessing remote servers
          + Stopping the SQL Server
          + Using buildmaster
          + Installing the SQL Server
          + Using the standard databases
          + Admin Utilities and Tools
          + System stored procedures
          + Using system tables
          + Load and unload utilities
          + Resources
          + Disk mirroring
          + Creating databases
          + Managing segments
          + Managing transaction logs
          + Managing thresholds
          + Managing audit logs
          + Devices
          + Security
          + Establishing security
          + Roles
          + Managing user accounts
     * Sybase SQL Server Operations Exam Topics:
          + Monitoring
          + Starting the Backup Server
          + Monitoring the errorlog
          + Diagnostics
          + Resolving contention and locking problems
          + Managing application stored procedures
          + Recovery
          + Backup
          + Load
          + Backup strategies
          + Security
          + Establishing security
          + Roles
          + Managing user accounts
          + Admin utilities and tools
          + System stored procedures
          + Using system tables
          + Load and unload utilities
            
   
     _________________________________________________________________

                     Q1.29: WHAT IS _CMAXPKTSZ_ GOOD FOR?
                                       
   
     _________________________________________________________________
   
   _cmaxpktsz_ corresponds to the parameter "maximum network packet size"
   which you can see through _sp_configure_. I recommend only updating
   this value through _sp_configure_. If some of your applications send
   or receive large amounts of data across the network, these
   applications can achieve significant performance improvement by using
   larger packet sizes. Two examples are large bulk copy operations and
   applications reading or writing large text or image values. Generally,
   you want to keep the value of default network packet size small for
   users performing short queries, and allow users who send or receive
   large volumes of data to request larger packet sizes by setting the
   maximum network packet size configuration variable.
   
   _caddnetmem_ corresponds to the parameter "additional netmem" which
   you can see through _sp_configure_. Again, I recommend only updating
   this value through _sp_configure_. "additional netmem" sets the
   maximum size of additional memory that can be used for network packets
   that are larger than SQL Server's default packet size. The default
   value for additional netmem is 0, which means that no extra space has
   been allocated for large packets. See the discussion below, under
   maximum network packet size, for information on setting this
    configuration variable. Memory allocated with "additional netmem" is
    added to the memory allocated by the "memory" configuration
    parameter; it does not affect other SQL Server memory uses.
   
   SQL Server guarantees that every user connection will be able to log
   in at the default packet size. If you increase maximum network packet
   size and additional netmem remains set to 0, clients cannot use packet
   sizes that are larger than the default size: all allocated network
   memory will be reserved for users at the default size. In this
   situation, users who request a large packet size when they log in
   receive a warning message telling them that their application will use
   the default size. To determine the value for additional netmem if your
   applications use larger packet sizes:
      * Estimate the number of simultaneous users who will request the
        large packet sizes and the sizes their applications will
        request, and total up the requested sizes.
      * Multiply this sum by three, since each connection needs three
        buffers.
      * Add 2% for overhead, rounded up to the next multiple of 512 (see
        the worked example below).
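
   For example (the numbers are made up): suppose 10 users will log in
   requesting 4096-byte packets.

        10 users * 4096 bytes             =  40960
        40960 * 3 buffers per connection  = 122880
        122880 * 1.02 (2% overhead)       = 125338 (approx.)
        rounded up to a multiple of 512   = 125440

   So "additional netmem" would be set to 125440 for this workload.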
       
   
     _________________________________________________________________

                          Q1.30: FAQ ON PARTITIONING
                                       
   
     _________________________________________________________________
   
  Index of Sections
     * What Is Table Partitioning?
          + Page Contention for Inserts
          + I/O Contention
          + Caveats Regarding I/O Contention
     * Can I Partition Any Table?
          + How Do I Choose Which Tables To Partition?
     * Does Table Partitioning Require User-Defined Segments?
     * Can I Run Any Transact-SQL Command on a Partitioned Table?
     * How Does Partition Assignment Relate to Transactions?
     * Can Two Tasks Be Assigned to the Same Partition?
     * Must I Use Multiple Devices to Take Advantage of Partitions?
     * How Do I Create A Partitioned Table That Spans Multiple Devices?
     * How Do I Take Advantage of Table Partitioning with bcp in?
     * Getting More Information on Table Partitioning
       
   
   
    What Is Table Partitioning?
    
   Table partitioning is a procedure that creates multiple page chains
   for a single table.
   
   The primary purpose of table partitioning is to improve the
   performance of concurrent inserts to a table by reducing contention
   for the last page of a page chain.
   
   Partitioning can also potentially improve performance by making it
   possible to distribute a table's I/O over multiple database devices. 
   
      Page Contention for Inserts
      
   By default, SQL Server stores a table's data in one double-linked set
   of pages called a page chain. If the table does not have a clustered
   index, SQL Server makes all inserts to the table in the last page of
   the page chain.
   
   When a transaction inserts a row into a table, SQL Server holds an
   exclusive page lock on the last page while it inserts the row. If the
   current last page becomes full, SQL Server allocates and links a new
   last page.
   
   As multiple transactions attempt to insert data into the table at the
   same time, performance problems can occur. Only one transaction at a
   time can obtain an exclusive lock on the last page, so other
   concurrent insert transactions block each other.
   
   Partitioning a table creates multiple page chains (partitions) for the
   table and, therefore, multiple last pages for insert operations. A
   partitioned table has as many page chains and last pages as it has
   partitions. 
   
      I/O Contention
      
   Partitioning a table can improve I/O contention when SQL Server writes
   information in the cache to disk. If a table's segment spans several
   physical disks, SQL Server distributes the table's partitions across
   fragments on those disks when you create the partitions.
   
   A fragment is a piece of disk on which a particular database is
   assigned space. Multiple fragments can sit on one disk or be spread
   across multiple disks.
   
   When SQL Server flushes pages to disk and your fragments are spread
   across different disks, I/Os assigned to different physical disks can
   occur in parallel.
   
   To improve I/O performance for partitioned tables, you must ensure
   that the segment containing the partitioned table is composed of
   fragments spread across multiple physical devices. 
   
      Caveats Regarding I/O Contention
      
   Be aware that when you use partitioning to balance I/O you run the
   risk of disrupting load balancing even as you are trying to achieve
   it. The following scenarios can keep you from gaining the load
   balancing benefits you want:
     * You are partitioning an existing table. The existing data could be
       sitting on any fragment. Because partitions are randomly assigned,
       you run the risk of filling up a fragment. The partition will then
       steal space from other fragments, thereby disrupting load
       balancing.
     * Your fragments differ in size.
     * The segment maps are configured such that other objects are using
       the fragments to which the partitions are assigned.
     * A very large bcp job inserts many rows within a single
       transaction. Because a partition is assigned for the lifetime of a
       transaction, a huge amount of data could go to one particular
       partition, thus filling up the fragment to which that partition is
       assigned.
       
   
   
    Can I Partition Any Table?
    
   No. You cannot partition the following kinds of tables:
    1. Tables with clustered indexes
    2. SQL Server system tables
    3. Work tables
    4. Temporary tables
    5. Tables that are already partitioned. However, you can unpartition
       and then re-partition tables to change the number of partitions.
       
   
   
      How Do I Choose Which Tables To Partition?
      
   You should partition heap tables that have large amounts of concurrent
   insert activity. (A heap table is a table with no clustered index.)
   Here are some examples:
    1. An "append-only" table to which every transaction must write
    2. Tables that provide a history or audit list of activities
    3. A new table into which you load data with bcp in. Once the data is
       loaded in, you can unpartition the table. This enables you to
       create a clustered index on the table, or issue other commands not
        permitted on a partitioned table.
       
   
   
    Does Table Partitioning Require User-Defined Segments?
    
    No. By design, each table is intrinsically assigned to one segment,
   called the default segment. When a table is partitioned, any
   partitions on that table are distributed among the devices assigned to
   the default segment.
   
   In the example under "How Do I Create A Partitioned Table That Spans
   Multiple Devices?", the table sits on a user-defined segment that
   spans three devices. 
   
    Can I Run Any Transact-SQL Command on a Partitioned Table?
    
    No. Once you have partitioned a table, you cannot use any of the
   following Transact-SQL commands on the table until you unpartition it:
    1. create clustered index
    2. drop table
    3. sp_placeobject
    4. truncate table
    5. alter table table_name partition n
       
   
   
    How Does Partition Assignment Relate to Transactions?
    
    A user is assigned to a partition for the duration of a transaction.
   Assignment of partitions resumes with the first insert in a new
   transaction. The user holds the lock, and therefore the partition, until
   the transaction ends.
   
   For this reason, if you are inserting a great deal of data, you should
   batch it into separate jobs, each within its own transaction. See "How
   Do I Take Advantage of Table Partitioning with bcp in?", for details. 
   
    Can Two Tasks Be Assigned to the Same Partition?
    
   Yes. SQL Server randomly assigns partitions. This means there is
   always a chance that two users will vie for the same partition when
   attempting to insert and one would lock the other out.
   
   The more partitions a table has, the lower the probability of users
   trying to write to the same partition at the same time. 
   
    Must I Use Multiple Devices to Take Advantage of Partitions?
    
    It depends on which type of performance improvement you want.
   
   Table partitioning improves performance in two ways: primarily, by
   decreasing page contention for inserts and, secondarily, by decreasing
   i/o contention. "What Is Table Partitioning?" explains each in detail.
   
   
   If you want to decrease page contention you do not need multiple
   devices. If you want to decrease i/o contention, you must use multiple
   devices. 
   
    How Do I Create A Partitioned Table That Spans Multiple Devices?
    
    Creating a partitioned table that spans multiple devices is a
   multi-step procedure. In this example, we assume the following:
     * We want to create a new segment rather than using the default
       segment.
     * We want to spread the partitioned table across three devices,
       data_dev1, data_dev2, and data_dev3.
       
   Here are the steps:
    1. Define a segment:
       
     sp_addsegment newsegment, my_database,data_dev1
    2. Extend the segment across all three devices:
       
     sp_extendsegment newsegment, my_database, data_dev2
     sp_extendsegment newsegment, my_database, data_dev3
    3. Create the table on the segment:
       
     create table my_table
      (names varchar(80) not null)
     on newsegment
    4. Partition the table:
       
     alter table my_table partition 30
     
   
   
    How Do I Take Advantage of Table Partitioning with bcp in?
    
    You can take advantage of table partitioning with bcp in by following
   these guidelines:
    1. Break up the data file into multiple files and simultaneously run
       each of these files as a separate bcp job against one table.
       
       Running simultaneous jobs increases throughput.
    2. Choose a number of partitions greater than the number of bcp jobs.
       
       
       Having more partitions than processes (jobs) decreases the
       probability of page lock contention.
    3. Use the batch option of bcp in. For example, after every 100 rows,
       force a commit. Here is the syntax of this command:
       
     bcp table_name in filename -b100
   Each time a transaction commits, SQL Server randomly assigns a new
       partition for the next insert. This, in turn, reduces the
       probability of page lock contention.
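
   Putting these guidelines together, a sketch of a parallel load (the
   database, table and file names, and the partition count, are all made
   up): partition the table, split the data file, then run several bcp
   jobs concurrently, each committing every 100 rows:

1> alter table big_table partition 20
2> go

% split -l 250000 big_table.dat load.
% bcp my_db..big_table in load.aa -c -b100 -U .. &
% bcp my_db..big_table in load.ab -c -b100 -U .. &
% bcp my_db..big_table in load.ac -c -b100 -U .. &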
       
   
   
    Getting More Information on Table Partitioning
    
   For more information on table partitioning, see the chapter on
   controlling physical data placement in the SQL Server Performance and
   Tuning Guide.
     _________________________________________________________________

                   Q1.31: SHRINKING VARCHAR(M) TO VARCHAR(N)
                                       
   
     _________________________________________________________________
   
   Before you start:
   
     select max(datalength(column_name)) from affected_table
     
   In other words, _please_ be sure you're going into this with your head
   on straight.
   
  How To Change System Catalogs
  
   This information is Critical To The Defense Of The Free World, and you
   would be Well Advised To Do It Exactly As Specified:
    1. In master:
       
      sp_configure allow, 1
      reconfigure with override
    2. Use the victim database.
    3. _begin tran_ - that way, if you upscrew, you can correct the
       damage.
     4. Perform your update (a sketch of the sort of statement involved
        appears after this list).
    5. _check your results_. Did you get the expected number of rows
       affected? When you _select_ the rows you thought you changed, do
       you see your changes? If not, _rollback tran_ and try again.
    6. When everything is as it should be, _commit tran_
    7. In master:
       
     sp_configure allow, 0
     reconfigure
   ...after all, you don't want some other bright kid coming in and
       making other changes behind yer back. You know what you're doing
       and why, but you'd be amazed how many others out there don't.
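
   For step 4, the update is typically made against _syscolumns_. A
   minimal sketch (the table name, column name and new length are
   hypothetical -- verify the row you are about to change before you
   commit):

1> update syscolumns
2> set    length = 40
3> where  id   = object_id("affected_table")
4> and    name = "column_name"
5> go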
       
   
     _________________________________________________________________
-- 
Pablo Sanchez              | Ph # (415) 933.3812        Fax # (415) 933.2821
pablo@sgi.com              | Pg # (800) 930.5635  -or-  pablo_p@corp.sgi.com
===============================================================================
I am accountable for my actions.   http://reality.sgi.com/pablo [ /Sybase_FAQ ]

----------------------------------------------------------------------

From: pablo@sgi.com (Pablo Sanchez)
Newsgroups: comp.databases.sybase,comp.answers,news.answers
Subject: Sybase FAQ: 3/8 - section 2
Date: 1 Jul 1996 14:30:00 GMT
Summary: Info about SQL Server, bcp, isql and other goodies

Archive-name: databases/sybase-faq/part3
URL: http://reality.sgi.com/pablo/Sybase_FAQ


Q2.1: Point Characteristic Functions

----------------------------------------------------------------------------
These functions return zero if the condition on columns a and b is not true
and one if it is true.

                Equation              Emulation

                a=b      1-abs(sign(a-b))

                a!=b     abs(sign(a-b))

                a<b      1-sign(1+sign(a-b))

                a<=b     sign(1-sign(a-b))

                a>b      1-sign(1-sign(a-b))

                x
                between  sign(1-sign(a-x)) * sign(1-sign(x-b))
                a and b
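
   These are handy wherever you need a conditional sum or count without
   an if/case construct (Q2.3 uses them to pivot a table). A tiny
   illustration against a hypothetical table t with integer columns a
   and b:

1> select "rows where a = b" = sum(1 - abs(sign(a - b)))
2> from t
3> go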

----------------------------------------------------------------------------

          Q2.2: HOW TO IMPLEMENT _IF-THEN-ELSE_ IN A _SELECT_ CLAUSE
                                       
   
     _________________________________________________________________
   
   If you need to implement the following condition in a _select_ clause:
   

if @val = 'small' then
   print 'petit'
else
   print 'grand'
fi

   do the following:
   

select isnull(substring('petit', charindex('small', @val), 255), 'grand')

   To test it out, try the following T-SQL:
   

declare @val char(20)

select @val = 'grand'

select isnull(substring('petit', charindex('small', @val), 255), 'grand')

   
     _________________________________________________________________

                       Q2.3: HOW TO INVERT/PIVOT A TABLE
                                       
   
     _________________________________________________________________
   
   In some applications, it's necessary to store details by row but to
   report the results by column.
   
   Here's an example describing the above problem _and_ a solution for it
   as well - the _sql_ emulates the Oracle _decode_ function:
     * Say you had the following table...

        create table #account
        (acct  int,
         month int,
         amt   int)
        go
             * ...and it was populated as follows

        insert into #account
        select 1, 1, 10
        insert into #account
        select 1, 2, 10
        insert into #account
        select 1, 3, 10
        insert into #account
        select 1, 4, 10
        insert into #account
        select 1, 5, 10
        insert into #account
        select 1, 6, 10
        insert into #account
        select 1, 7, 10
        insert into #account
        select 1, 8, 10
        insert into #account
        select 1, 9, 10
        insert into #account
        select 1, 10, 10
        insert into #account
        select 1, 11, 10
        insert into #account
        select 1, 12, 10
        go

        insert into #account
        select 2, 1, 20
        insert into #account
        select 2, 2, 20
        insert into #account
        select 2, 3, 20
        insert into #account
        select 2, 4, 20
        insert into #account
        select 2, 5, 20
        insert into #account
        select 2, 6, 20
        insert into #account
        select 2, 7, 20
        insert into #account
        select 2, 8, 20
        insert into #account
        select 2, 9, 20
        insert into #account
        select 2, 10, 20
        insert into #account
        select 2, 11, 20
        go
             * So it contained the following data:

        select * from #account
        go
        acct        month       amt
        ----------- ----------- -----------
                  1           1          10
                  1           2          10
                  1           3          10
                  1           4          10
                  1           5          10
                  1           6          10
                  1           7          10
                  1           8          10
                  1           9          10
                  1          10          10
                  1          11          10
                  1          12          10
                  2           1          20
                  2           2          20
                  2           3          20
                  2           4          20
                  2           5          20
                  2           6          20
                  2           7          20
                  2           8          20
                  2           9          20
                  2          10          20
                  2          11          20
             * and you executed the following SQL:

        select  acct,
                mth1  = sum(amt * (1 - abs(sign(month - 1)))),
                mth2  = sum(amt * (1 - abs(sign(month - 2)))),
                mth3  = sum(amt * (1 - abs(sign(month - 3)))),
                mth4  = sum(amt * (1 - abs(sign(month - 4)))),
                mth5  = sum(amt * (1 - abs(sign(month - 5)))),
                mth6  = sum(amt * (1 - abs(sign(month - 6)))),
                mth7  = sum(amt * (1 - abs(sign(month - 7)))),
                mth8  = sum(amt * (1 - abs(sign(month - 8)))),
                mth9  = sum(amt * (1 - abs(sign(month - 9)))),
                mth10 = sum(amt * (1 - abs(sign(month - 10)))),
                mth11 = sum(amt * (1 - abs(sign(month - 11)))),
                mth12 = sum(amt * (1 - abs(sign(month - 12))))
        from #account
        group by acct
     * ...producing the following output:

  acct  mth1  mth2  mth3  mth4  mth5  mth6  mth7  mth8  mth9 mth10 mth11 mth12
 ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
     1    10    10    10    10    10    10    10    10    10    10    10    10
     2    20    20    20    20    20    20    20    20    20    20    20  NULL
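
   The trick works because sign(month - N) is 0 only when month = N, so
   1 - abs(sign(month - N)) acts as a 0/1 flag; multiplying amt by that
   flag reproduces Oracle's decode(month, N, amt, 0). A minimal sketch of
   the flag on its own, using the table above:

        select month,
               flag = 1 - abs(sign(month - 3))   /* 1 only when month = 3 */
          from #account
         where acct = 1
        go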


   
     _________________________________________________________________

          Q2.4: HOW TO PAD WITH LEADING ZEROS AN _INT_ OR _SMALLINT_.
                                       
   
     _________________________________________________________________
   
   By example:

declare @Integer        int

/* Good for positive numbers only. */
select @Integer = 1000

select "Positives Only" =
       right( replicate("0", 12) + convert(varchar, @Integer), 12)

/* Good for positive and negative numbers. */
select @Integer = -1000

select "Both Signs" =
       substring( "- +", (sign(@Integer) + 2), 1) +
       right( replicate("0", 12) + convert(varchar, abs(@Integer)), 12)

select @Integer = 1000

select "Both Signs" =
       substring( "- +", (sign(@Integer) + 2), 1) +
       right( replicate("0", 12) + convert(varchar, abs(@Integer)), 12)

go

   Produces the following results:

 Positives Only
 --------------
 000000001000

 Both Signs
 -------------
 -000000001000

 Both Signs
 -------------
 +000000001000

   
     _________________________________________________________________

                        Q2.5: DIVIDE BY ZERO AND NULLS
                                       
   
     _________________________________________________________________
   
   During processing, if a divide-by-zero error occurs you will not get
   the result set you want. If you want the result set to come back with
   NULL displayed wherever a divide by zero would occur, do the following.
   The trick works because abs(sign(field2)) is 0 when field2 is 0, so
   substring('1', 1, 0) returns NULL; that NULL propagates through the
   convert, the multiply and the divide, so the row comes back with NULL
   instead of raising an error:

1> select * from total_temp
2> go
 field1      field2
 ----------- -----------
          10          10
          10           0
          10        NULL

(3 rows affected)
1> select field1, field1/(field2*convert(int,
                  substring('1',1,abs(sign(field2))))) from total_temp
2> go
 field1
 ----------- -----------
          10           1
          10        NULL
          10        NULL

   
     _________________________________________________________________

                   Q2.6: CONVERT MONTHS TO FINANCIAL MONTHS
                                       
   
     _________________________________________________________________
   
   To convert calendar months to financial-year months (i.e. July = 1,
   Dec = 6, Jan = 7, June = 12):

select ... ((sign(sign((datepart(month,GetDate())-6) * -1)+1) *
   (datepart(month, GetDate())+6))
      + (sign(sign(datepart(month, GetDate())-7)+1) *
   (datepart(month, GetDate())-6)))
   ...
  from ...
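
   For instance, here is a minimal sketch (a plain WHILE loop, no table
   needed) that runs the same expression over every calendar month so you
   can verify the mapping:

declare @m int
select @m = 1
while (@m <= 12)
begin
    select calendar_month = @m,
           financial_month = ((sign(sign((@m - 6) * -1) + 1) * (@m + 6))
                            + (sign(sign(@m - 7) + 1) * (@m - 6)))
    select @m = @m + 1
end
go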

   
     _________________________________________________________________

                       Q2.7: HIERARCHY TRAVERSAL - BOMS
                                       
   
     _________________________________________________________________
   
   Alright, so you wanna know more about representing hierarchies in a
   relational database? Before I get in to the nitty gritty I should at
   least give all of the credit for this algorithm to:
   "_Hierarchical Structures: The Relational Taboo!, (Can Transitive
   Closure Queries be Efficient?)_", by Michael J. Kamfonas, as published
   in the 1992 "Relational Journal" (I don't know which volume or issue).
   
   The basic algorithm goes like this, given a tree (hierarchy) that
   looks roughly like this (forgive the ASCII art--I hope you are using a
   fixed font to view this):

                                    a
                                   / \
                                 /     \
                               /         \
                             b             c
                            / \           /|\
                           /   \        /  |  \
                          /     \     /    |   \
                          d       e   f     g    h

     Note, that the tree need not be balanced for this algorithm to work.
     
   The next step assigns two numbers to each node in the tree, called
   left and right numbers, such that the left and right numbers of each
   node contain the left and right numbers of all of that node's
   descendants (I'll get into the algorithm for assigning these left and
   right numbers later, but, _hint: use a depth-first search_):

                                   1a16
                                   / \
                                 /     \
                               /         \
                            2b7           8c15
                            / \           /|\
                           /   \        /  |  \
                          /     \     /    |   \
                        3d4     5e6 9f10 11g12 13h14

     Side Note: The careful observer will notice that these left and
     right numbers look an awful lot like a B-Tree index.
     
   So, you will notice that all of the children of node 'a' have left and
   right numbers between 1 and 16, and likewise all of the children of
   'c' have left and right numbers between 8 and 15. In a slightly more
   relational format this table would look like:

              Table: hier
   node   parent left_nbr  right_nbr
   -----  ------ --------  ---------
   a        NULL        1         16
   b           a        2          7
   c           a        8         15
   d           b        3          4
   e           b        5          6
   f           c        9         10
   g           c       11         12
   h           c       13         14
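
   A minimal sketch of the DDL behind this layout (the column sizes,
   nullability and indexes are assumptions, not part of the original
   article):

    CREATE TABLE hier (
        node       varchar(10)  NOT NULL,
        parent     varchar(10)  NULL,
        left_nbr   int          NULL,    /* filled in by the numbering pass */
        right_nbr  int          NULL
    )
    CREATE UNIQUE CLUSTERED INDEX hier_node ON hier (node)
    CREATE NONCLUSTERED INDEX hier_nbrs ON hier (left_nbr, right_nbr)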

   So, given a node name, say @node (in Sybase variable format), if you
   want to know all of the children of that node you can do:

    SELECT h2.node
      FROM hier   h1,
           hier   h2
     WHERE h1.node      =   @node
       AND h2.left_nbr  >   h1.left_nbr
       AND h2.left_nbr  <   h1.right_nbr

If you had a table that contained, say, the salary for each node
in your hierarchy (assuming a node is actually an individual in a company)
you could then figure out the total salary for all of the people
working underneath @node by doing:


    SELECT sum(s.salary)
      FROM hier   h1,
           hier   h2,
           salary s
     WHERE h1.node      =   @node
       AND h2.left_nbr  >   h1.left_nbr
       AND h2.left_nbr  <   h1.right_nbr
       AND s.node       =   h2.node

Pretty cool, eh?  And, conversely, if you wanted to know how much it
costs to manage @node (i.e. the combined salary of all of the bosses
of @node), you can do:


    SELECT sum(s.salary)
      FROM hier   h1,
           hier   h2,
           salary s
     WHERE h1.node      =   @node
       AND h2.left_nbr  <   h1.left_nbr
       AND h2.right_nbr >   h1.right_nbr
       AND s.node       =   h2.node


Now that you can see the algorithm in action everything looks peachy,
however the sticky point is the method in which left and right numbers
get assigned. And, unfortunately, there is no easy method to do this
relationally (it can be done, it just ain't that easy).  For a real-
world application that I have worked on, we had an external program
used to build and maintain the hierarchies, and it was this program's
responsibility to assign the left and right numbers.




But, in brief, here is the algorithm to assign left and right
numbers to every node in a hierarchy.  Note while reading this
that this algorithm uses an array as a stack; however, since arrays
are not available in Sybase, it is (questionably) emulated using
a temp table.


    DECLARE @skip            int,
            @counter         int,
            @idx             int,
            @left_nbr        int,
            @node            varchar(10)

    /*-- Initialize variables --*/
    SELECT @skip    = 1000,   /* Leave gaps in left & right numbers */
           @counter = 0,      /* Counter of next available left number */
           @idx     = 0       /* Index into array */

    /*
     * The following table is used to emulate an array for Sybase,
     * for Oracle this wouldn't be a problem. :(
     */
    CREATE TABLE #a (
        idx          int           NOT NULL,
        node         varchar(10)   NOT NULL,
        left_nbr     int           NOT NULL
    )

    /*
     * I know that I always preach about not using cursors, and there
     * are ways to get around it, but in this case I am more worried
     * about readability over performance.
     */
    DECLARE root_cur CURSOR FOR
      SELECT h.node
        FROM hier h
       WHERE h.parent IS NULL
    FOR READ ONLY

    /*
     * Here we are populating our "stack" with all of the root
     * nodes of the hierarchy.  We are using the cursor in order
     * to assign an increasing index into the "stack"...this could
     * be done using an identity column and a little trickery.
     */
    OPEN root_cur
    FETCH root_cur INTO @node
    WHILE (@@sqlstatus = 0)
    BEGIN
      SELECT @idx = @idx + 1
      INSERT INTO #a VALUES (@idx, @node, 0)
      FETCH root_cur INTO @node
    END
    CLOSE root_cur
    DEALLOCATE CURSOR root_cur

    /*
     * The following cursor will be employed to retrieve all of
     * the children of a given parent.
     */
    DECLARE child_cur CURSOR FOR
      SELECT h.node
        FROM hier h
       WHERE h.parent = @node
    FOR READ ONLY

    /*
     * While our stack is not empty.
     */
    WHILE (@idx > 0)
    BEGIN
      /*
       * Look at the element on the top of the stack.
       */
      SELECT @node      = node,
             @left_nbr  = left_nbr
        FROM #a
       WHERE idx = @idx

      /*
       * If the element at the top of the stack has not been assigned
       * a left number yet, then we assign it one and copy its children
       * on the stack as "nodes to be looked at".
       */
      IF (@left_nbr = 0)
      BEGIN
         /*
          * Set the left number of the current node to be @counter + @skip.
          * Note, we are doing a depth-first traversal, assigning left
          * numbers as we go.
          */
         SELECT @counter  = @counter + @skip
         UPDATE #a
            SET left_nbr  = @counter
          WHERE idx = @idx

         /*
          * Append the children of the current node to the "stack".
          */
         OPEN child_cur
         FETCH child_cur INTO @node
         WHILE (@@sqlstatus = 0)
         BEGIN
            SELECT @idx = @idx + 1
            INSERT INTO #a VALUES (@idx, @node, 0)
            FETCH child_cur INTO @node
         END
         CLOSE child_cur

      END
      ELSE
      BEGIN
         /*
          * It turns out that the current node already has a left
          * number assigned to it, so we just need to assign the
          * right number and update the node in the actual
          * hierarchy.
          */
         SELECT @counter = @counter + @skip

         UPDATE hier
            SET left_nbr  = @left_nbr,
                right_nbr = @counter
          WHERE hier.node = @node

         /*
          * "Pop" the current node off our "stack".
          */
         DELETE #a WHERE idx = @idx
         SELECT @idx = @idx - 1
      END
    END /* WHILE (@idx > 0) */
    DEALLOCATE CURSOR child_cur


While reading through this, you should notice that assigning the
left and right numbers to the entire hierarchy is very costly,
especially as the size of the hierarchy grows. If you put the
above code in an insert trigger on the hier table, the overhead
for inserting each node would be phenomenal. However, it is possible
to reduce the overall cost of an insertion into the hierarchy.

    1. By leaving huge gaps in the left & right numbers (using the @skip
       variable), you can reduce the circumstances in which the numbers
       need to be reassigned for a given insert. Thus, as long as you can
       squeeze a new node between an existing pair of left and right
       numbers you don't need to do the re-assignment (which could affect
       all of the nodes in the hierarchy).
    2. By keeping an extra flag around in the hier table to indicate
       which nodes are leaf nodes (this could be maintained with a
       trigger as well), you avoid placing leaf nodes in the array and
       thus reduce the number of updates.
   
   Deletes on this table should never cause the left and right numbers
   to be re-assigned (you could even have a trigger automagically
   re-parent orphaned hierarchy nodes).
   



All-in-all, this algorithm is very effective as long as the structure
of the hierarchy does not change very often, and even then, as you can
see, there are ways of getting around a lot of its inefficiencies.


  __________________________________________________________________________
-- 
Pablo Sanchez              | Ph # (415) 933.3812        Fax # (415) 933.2821
pablo@sgi.com              | Pg # (800) 930.5635  -or-  pablo_p@corp.sgi.com
===============================================================================
I am accountable for my actions.   http://reality.sgi.com/pablo [ /Sybase_FAQ ]

----------------------------------------------------------------------

Path: news1.ucsd.edu!ihnp4.ucsd.edu!munnari.OZ.AU!harbinger.cc.monash.edu.au!news.mira.net.au!Germany.EU.net!howland.reston.ans.net!gatech!news.mathworks.com!enews.sgi.com!news.corp.sgi.com!mew.corp.sgi.com!pablo
From: pablo@sgi.com (Pablo Sanchez)
Newsgroups: comp.databases.sybase,comp.answers,news.answers
Subject: Sybase FAQ: 4/8 - section 3
Supersedes: <FAQ.section_3_833665144@sgi.com>
Followup-To: comp.databases.sybase
Date: 1 Jul 1996 14:30:26 GMT
Organization: Silicon Graphics, Inc. Nederland, CO. USA
Lines: 3085
Approved: news-answers-request@MIT.EDU
Message-ID: <FAQ.section_3_836231368@sgi.com>
References: <FAQ.section_2_836231368@sgi.com>
Reply-To: pablo@sgi.com
NNTP-Posting-Host: mew.corp.sgi.com
Summary: Info about SQL Server, bcp, isql and other goodies
Posting-Frequency: monthly
Originator: pablo@mew.corp.sgi.com
Xref: news1.ucsd.edu comp.databases.sybase:29424 comp.answers:15546 news.answers:62014

Archive-name: databases/sybase-faq/part4
URL: http://reality.sgi.com/pablo/Sybase_FAQ



                Q3.1: SYBASE SQL SERVER PERFORMANCE AND TUNING
                                       
   
     _________________________________________________________________
   
All Components Affect Response Time & Throughput

   We often think that high performance is defined as a fast data server,
   but the picture is not that simple. Performance is determined by all
   these factors:
     * The client application itself:
          + How efficiently is it written?
          + We will return to this later, when we look at application
            tuning.
     * The client-side library:
          + What facilities does it make available to the application?
          + How easy are they to use?
     * The network:
          + How efficiently is it used by the client/server connection?
     * The DBMS:
          + How effectively can it use the hardware?
          + What facilities does it supply to help build efficient fast
            applications?
     * The size of the database:
          + How long does it take to dump the database?
          + How long to recreate it after a media failure?
            
   Unlike some products which aim at performance on paper, Sybase aims at
   solving the multi-dimensional problem of delivering high performance
   for real applications.
   
   _OBJECTIVES_
   
   To gain an overview of important considerations and alternatives for
   the design, development, and implementation of high performance
   systems in the Sybase client/server environment. The issues we will
   address are:
     * Client Application and API Issues
     * Physical Database Design Issues
     * Networking Issues
     * Operating System Configuration Issues
     * Hardware Configuration Issues
     * SQL Server Configuration Issues
       
   
   
     _Client Application and Physical Database Design design decisions
     will account for over 80% of your system's "tuneable" performance so
     ... plan your project resources accordingly ! _
     
   
   
   It is highly recommended that every project include individuals who
   have taken Sybase Education's Performance and Tuning course. This
   5-day course provides the hands-on experience essential for success.
   
   Client Application Issues
     * Tuning Transact-SQL Queries
     * Locking and Concurrency
     * ANSI Changes Affecting Concurrency
     * Application Deadlocking
     * Optimizing Cursors in v10
     * Special Issues for Batch Applications
     * Asynchronous Queries
     * Generating Sequential Numbers
     * Other Application Issues
       
Tuning Transact-SQL Queries

     * Learn the Strengths and Weaknesses of the Optimizer
     * One of the largest factors determining performance is TSQL! Test
       not only for efficient plans but also semantic correctness.
     * Optimizer will cost every permutation of accesses for queries
       involving 4 tables or less. Joins of more than 4 tables are
       "planned" 4-tables at a time (as listed in the FROM clause) so not
       all permutations are evaluated. You can influence the plans for
       these large joins by the order of tables in the FROM clause.
     * Avoid the following, if possible:
          + What are SARGS?
            
            This is short for search arguments. A search argument is
            essentially a constant value such as:
               o "My company name"
               o 3448
        but not:
               o 344 + 88
               o like "%what you want%"
        
            
          + Mathematical Manipulation of SARGs (see the rewrite sketch at
            the end of this list)
            
     SELECT name FROM employee WHERE salary * 12 > 100000
          + Use of Incompatible Datatypes Between Column and its _SARG_
            
     Float & Int, Char & Varchar, Binary & Varbinary are incompatible;
     
     Int & Intn (allows nulls) is OK
          + Use of multiple "OR" Statements - especially on different
            columns in same table. If any portion of the OR clause
            requires a table scan, it will! OR Strategy requires
            additional cost of creating and sorting a work table.
          + Not using the leading portion of the index (unless the query
            is completely covered)
          + Substituting "OR" with "IN (value1, value2, ... valueN)" -
            the optimizer automatically converts this to an "OR" anyway
          + Use of Non-Equal Expressions (!=) in WHERE Clause.
     * Use Tools to Evaluate and Tune Important/Problem Queries
          + Use the "set showplan on" command to see the plan chosen as
            "most efficient" by optimizer. Run all queries through during
            development and testing to ensure accurate access model and
            known performance. Information comes through the Error
            Handler of a DB-Library application.
          + Use the "dbcc traceon(3604, 302, 310)" command to see each
            alternative plan evaluated by the optimizer. Generally, this
            is only necessary to understand why the optimizer won't give
            you the plan you want or need (or think you need)!
          + Use the "set statistics io on" command to see the number of
            logical and physical i/o's for a query. Scrutinize those
            queries with high logical i/o's.
          + Use the "set statistics time on" command to see the amount of
            time (elapsed, execution, parse and compile) a query takes to
            run.
          + If the optimizer turns out to be a "pessimizer", use the "set
            forceplan on" command to change join order to be the order of
            the tables in the FROM clause.
          + If the optimizer refuses to select the proper index for a
            table, you can force it by adding the index id in parentheses
            after the table name in the FROM clause.
            
     SELECT * FROM orders(2), order_detail(1) WHERE ...
     
     _This may cause portability issues should index id's vary/change by
     site ! _
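
      To illustrate the SARG advice above, here is a hedged sketch (the
      employee table and an index on salary are assumptions) showing a
      non-sargable predicate rewritten so the optimizer can use the index,
      wrapped in the diagnostic settings just described:

      set showplan on
      set statistics io on
      go

      /* Non-sargable: the expression on the column hides the index */
      SELECT name FROM employee WHERE salary * 12 > 100000
      go

      /* Sargable rewrite: constant arithmetic moved off the column */
      SELECT name FROM employee WHERE salary > 100000.0 / 12
      go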
     
Locking and Concurrency

     * The Optimizer Decides on Lock Type and Granularity
     * Decisions on lock type (share, exclusive, or update) and
       granularity (page or table) are made during optimization so make
       sure your updates and deletes don't scan the table !
     * Exclusive Locks are Only Released Upon Commit or Rollback
     * Lock Contention can have a large impact on both throughput and
       response time if not considered both in the application and
       database design !
     * Keep transactions as small and short as possible to minimize
       blocking. Consider alternatives to "mass" updates and deletes such
       as a v10.0 cursor in a stored procedure which frequently commits.
     * Never include any "user interaction" in the middle of
       transactions.
     * Shared Locks Generally Released After Page is Read
     * Share locks "roll" through result set for concurrency. Only
       "HOLDLOCK" or "Isolation Level 3" retain share locks until commit
       or rollback. Remember also that HOLDLOCK is for read-consistency.
       It doesn't block other readers !
     * Use optimistic locking techniques such as timestamps and the
       tsequal() function to check for updates to a row since it was read
       (rather than holdlock)
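
      A hedged sketch of that optimistic approach (table, column and
      variable names are made up; the row's timestamp column is assumed to
      have been selected into @old_ts when the row was first read):

      UPDATE titles
         SET price = @new_price
       WHERE title_id = @title_id
         AND tsequal(timestamp, @old_ts)
      /* tsequal() rejects the update with an error if another connection
         has changed the row -- and hence its timestamp -- since it was read. */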
       
ANSI Changes Affecting Concurrency

     * Chained Transactions Risk Concurrency if Behavior not Understood
     * Sybase defaults each DML statement to its own transaction if not
       specified ;
     * ANSI automatically begins a transaction with any SELECT, FETCH,
       OPEN, INSERT, UPDATE, or DELETE statement ;
     * If Chained Transaction must be used, extreme care must be taken to
       ensure locks aren't left held by applications unaware they are
       within a transaction! This is especially crucial if running at
       Level 3 Isolation
     * Lock at the Level of Isolation Required by the Query
     * Read Consistency is NOT a requirement of every query.
     * Choose level 3 only when the business model requires it
     * Running at Level 1 but selectively applying HOLDLOCKs as needed is
       safest
     * If you must run at Level 3, use the NOHOLDLOCK clause when you can
       !
     * Beware of (and test) ANSI-compliant third-party applications for
       concurrency
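
      A minimal sketch of locking at the level required (the pubs2 authors
      table is assumed): even when the session runs at Level 3, a query
      that doesn't need read consistency can give its share locks back as
      each page is read.

      set transaction isolation level 3
      go
      select au_lname, au_fname from authors noholdlock
      go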
       
Application Deadlocking

   Prior to SQL Server 10 cursors, many developers simulated cursors by
   using two or more connections (dbproc's) and divided the processing
   between them. Often, this meant one connection had a SELECT open while
   "positioned" UPDATEs and DELETEs were issued on the other connection.
   The approach inevitably leads to the following problem:
    1. Connection A holds a share lock on page X (remember "Rows Pending"
       on SQL Server leave a share lock on the "current" page).
    2. Connection B requests an exclusive lock on the same page X and
       waits...
    3. The APPLICATION waits for connection B to succeed before invoking
       whatever logic will remove the share lock (perhaps dbnextrow). Of
       course, that never happens ...
       
   Since Connection A never requests a lock which Connection B holds,
   this is NOT a true server-side deadlock. It's really an "application"
   deadlock !
   
  Design Alternatives
    1. Buffer additional rows in the client that are "nonupdateable".
       This forces the shared lock onto a page on which the application
       will not request an exclusive lock.
    2. Re-code these modules with CT-Library cursors (aka. server-side
       cursors). These cursors avoid this problem by disassociating
       command structures from connection structures.
    3. Re-code these modules with DB-Library cursors (aka. client-side
       cursors). These cursors avoid this problem through buffering
       techniques and re-issuing of SELECTs. Because of the re-issuing of
       SELECTs, these cursors are not recommended for high transaction
       sites !
       
Optimizing Cursors with v10.0

     * Always Declare Cursor's Intent (i.e. Read Only or Updateable)
     * Allows for greater control over concurrency implications
     * If not specified, SQL Server will decide for you and usually
       choose updateable
     * Updateable cursors use UPDATE locks preventing other U or X locks
     * Updateable cursors that include indexed columns in the update list
       may table scan
     * SET Number of Rows for each FETCH
      * Allows for greater network optimization over ANSI's 1-row fetch
     * Rows fetched via Open Client cursors are transparently buffered in
       the client:

                FETCH  ->  Open Client  <---  N rows
                           buffers
     * Keep Cursor Open on a Commit / Rollback
     * ANSI closes cursors with each COMMIT causing either poor
       throughput (by making the server re-materialize the result set) or
       poor concurrency (by holding locks)
     * Open Multiple Cursors on a Single Connection
     * Reduces resource consumption on both client and Server
      * Eliminates the risk of the client deadlocking with itself
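
      A minimal Transact-SQL sketch of the first two points -- declaring
      the cursor's intent and setting the fetch count (the pubs2 authors
      table is assumed):

      declare au_list cursor
          for select au_id, au_lname from authors
          for read only
      go
      set cursor rows 50 for au_list    /* buffer 50 rows per FETCH */
      go
      open au_list
      fetch au_list
      close au_list
      deallocate cursor au_list
      go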
       
Special Issues for Batch Applications

   SQL Server was not designed as a batch subsystem! It was designed as
   an RDBMS for large multi-user applications. Designers of
   batch-oriented applications should consider the following design
   alternatives to maximize performance :
   
   Design Alternatives :
     * Minimize Client/Server Interaction Whenever Possible
     * Don't turn SQL Server into a "file system" by issuing single table
       / single row requests when, in actuality, set logic applies.
     * Maximize TDS packet size for efficient Interprocess Communication
       (v10 only)
     * New SQL Server 10.0 cursors declared and processed entirely within
       stored procedures and triggers offer significant performance gains
       in batch processing.
     * Investigate Opportunities to Parallelize Processing
     * Breaking up single processes into multiple, concurrently
       executing, connections (where possible) will outperform single
        streamed processes every time.
     * Make Use of TEMPDB for Intermediate Storage of Useful Data
       
Asynchronous Queries

   Many, if not most, applications and 3rd Party tools are coded to send
   queries with the DB-Library call dbsqlexec( ) which is a synchronous
   call ! It sends a query and then waits for a response from SQL Server
   that the query has completed !
   
   Designing your applications for asynchronous queries provides many
   benefits:
    1. A "Cooperative" multi-tasking application design under Windows
       will allow users to run other Windows applications while your long
       queries are processed !
     2. Provides design opportunities to parallelize work across multiple
       SQL Server connections.
       
   
   
   Implementation Choices:
     * System 10 Client Library Applications:
     * True asynchronous behaviour is built into the entire library.
       Through the appropriate use of call-backs, asynchronous behavior
       is the normal processing paradigm.
     * Windows DB-Library Applications (not true async but polling for
       data):
     * Use dbsqlsend(), dbsqlok(), and dbdataready() in conjunction with
       some additional code in WinMain() to pass control to a background
       process. Code samples which outline two different Windows
       programming approaches (a PeekMessage loop and a Windows Timer
       approach) are available in the Microsoft Software Library on
       Compuserve (GO MSL). Look for _SQLBKGD.ZIP_
     * Non-PC DB-Library Applications (not true async but polling for
       data):
     * Use dbsqlsend(), dbsqlok(), and dbpoll() to utilize non-blocking
       functions.
       
Generating Sequential Numbers

   Many applications use unique sequentially increasing numbers, often as
   primary keys. While there are good benefits to this approach,
   generating these keys can be a serious contention point if not
   careful. For a complete discussion of the alternatives, download
   Malcolm Colton's White Paper on Sequential Keys from the SQL Server
   Library of our OpenLine forum on Compuserve.
   
   The two best alternatives are outlined below.
    1. "Primary Key" Table Storing Last Key Assigned
           + Minimize contention by either using a separate "PK" table for
            each user table or padding out each row to a page. Make sure
            updates are "in-place".
          + Don't include the "PK" table's update in the same transaction
            as the INSERT. It will serialize the transactions.

      _BEGIN TRAN_

                UPDATE pk_table SET nextkey = nextkey + 1
                [WHERE table_name = @tbl_name]
      _COMMIT TRAN_

      /* Now retrieve the information */
       SELECT nextkey FROM pk_table
       [WHERE table_name = @tbl_name]

          + "Gap-less" sequences require additional logic to store and
            retrieve rejected values
    2. IDENTITY Columns (v10.0 only)
          + Last key assigned for each table is stored in memory and
            automatically included in all INSERTs (BCP too). This should
            be the method of choice for performance.
          + Choose a large enough numeric or else all inserts will stop
            once the max is hit.
          + Potential rollbacks in long transactions may cause gaps in
            the sequence !
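
      Following on from the IDENTITY discussion above, a minimal sketch
      (table and column names are made up):

      CREATE TABLE orders
      (order_id   numeric(10,0)  identity,
       cust_id    int            not null,
       order_date datetime       not null)
      go
      INSERT INTO orders (cust_id, order_date)
      VALUES (42, getdate())
      SELECT @@identity        /* the key just assigned */
      go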
            
Other Application Issues

     * Transaction Logging Can Bottleneck Some High Transaction
       Environments
     * Committing a Transaction Must Initiate a Physical Write for
       Recoverability
     * Implementing multiple statements as a transaction can assist in
       these environment by minimizing the number of log writes (log is
       flushed to disk on commits).
     * Utilizing the Client Machine's Processing Power Balances Load
     * Client/Server doesn't dictate that everything be done on Server!
     * Consider moving "presentation" related tasks such as string or
       mathematical manipulations, sorting, or, in some cases, even
       aggregating to the client.
     * Populating of "Temporary" Tables Should Use "SELECT _INTO"_ -
       balance this with dynamic creation of temporary tables in an OLTP
       environment. Dynamic creation may cause blocks in your tempdb.
     * "SELECT INTO" operations are not logged and thus are significantly
        faster than their INSERT with a nested SELECT counterparts.
     * Consider Porting Applications to Client Library Over Time
     * True Asynchronous Behavior Throughout Library
     * Array Binding for SELECTs
     * Dynamic SQL
     * Support for ClientLib-initiated callback functions
     * Support for Server-side Cursors
     * Shared Structures with Server Library (Open Server 10)
       
Physical Database Design Issues

     * Normalized -vs- Denormalized Design
     * Index Selection
     * Promote "Updates-in-Place" Design
     * Promote Parallel I/O Opportunities
       
Normalized -vs- Denormalized

     * Always Start with a Completely Normalized Database
     * Denormalization should be an optimization taken as a result of a
       performance problem
     * Benefits of a normalized database include :
         1. Accelerates searching, sorting, and index creation since
            tables are narrower
         2. Allows more clustered indexes and hence more flexibility in
            tuning queries, since there are more tables ;
         3. Accelerates index searching since indexes tend to be narrower
            and perhaps shorter ;
         4. Allows better use of segments to control physical placement
            of tables ;
         5. Fewer indexes per table, helping UPDATE, INSERT, and DELETE
            performance ;
         6. Fewer NULLs and less redundant data, increasing compactness
            of the database ;
         7. Accelerates trigger execution by minimizing the extra
            integrity work of maintaining redundant data.
         8. Joins are Generally Very Fast Provided Proper Indexes are
            Available
         9. Normal caching and cindextrips parameter (discussed in Server
            section) means each join will do on average only 1-2 physical
            I/Os.
        10. Cost of a logical I/O (get page from cache) only 1-2
            milliseconds.
            
   There Are Some Good Reasons to Denormalize
    1. All queries require access to the "full" set of joined data.
    2. Majority of applications scan entire tables doing joins.
     3. Computational complexity of derived columns requires storage for
        SELECTs
    4. Others ...
       
Index Selection

     * Without a clustered index, all INSERTs and "out-of-place" UPDATEs
       go to the last page. The lock contention in high transaction
       environments would be prohibitive. This is also true for INSERTs
       to a clustered index on a monotonically increasing key.
     * High INSERT environments should always cluster on a key which
       provides the most "randomness" (to minimize lock / device
       contention) that is usable in many queries. Note this is generally
       not your primary key !
     * Prime candidates for clustered index (in addition to the above)
       include :
          + Columns Accessed by a Range
          + Columns Used with Order By, Group By, or Joins
     * Indexes Help SELECTs and Hurt INSERTs
     * Too many indexes can significantly hurt performance of INSERTs and
       "out-of-place" UPDATEs.
     * Prime candidates for nonclustered indexes include :
          + Columns Used in Queries Requiring Index Coverage
          + Columns Used to Access Less than 20% (rule of thumb) of the
            Data.
     * Unique indexes should be defined as UNIQUE to help the optimizer
     * Minimize index page splits with Fillfactor (helps concurrency and
       minimizes deadlocks)
     * Keep the Size of the Key as Small as Possible
     * Accelerates index scans and tree traversals
     * Use small datatypes whenever possible . Numerics should also be
       used whenever possible as they compare faster than strings.
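
      A hedged sketch of these guidelines (table, column and index names
      are made up): cluster on a reasonably "random" composite key, cover a
      hot query with a small nonclustered index, declare uniqueness where
      it exists, and leave room in the pages with a fillfactor:

      CREATE UNIQUE CLUSTERED INDEX orders_cl
          ON orders (cust_id, order_id)
          WITH fillfactor = 80
      go
      CREATE NONCLUSTERED INDEX orders_status_nc
          ON orders (ord_status, order_date)
      go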
       
Promote "Update-in-Place" Design

     * "Update-in-Place" Faster by Orders of Magnitude
     * Performance gain dependent on number of indexes. Recent benchmark
       (160 byte rows, 1 clustered index and 2 nonclustered) showed 800%
       difference!
     * Alternative ("Out-of-Place" Update) implemented as a physical
       DELETE followed by a physical INSERT. These tactics result in:
         1. Increased Lock Contention
         2. Increased Chance of Deadlock
         3. Decreased Response Time and Throughput
     * Currently (System 10 and below), Rules for "Update-in-Place"
       Behavior Include :
         1. Columns updated can not be variable length or allow nulls
         2. Columns updated can not be part of an index used to locate
            the row to update
         3. No update trigger on table being updated (because the
            inserted and deleted tables used in triggers get their data
            from the log)
            
     In v4.9.x and below, only one row may be affected and the optimizer
     must know this in advance by choosing a UNIQUE index. System 10
     eliminated this limitation.
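
      For example, a hedged sketch of an update that qualifies under those
      rules (names are assumptions): ord_status is fixed-length and NOT
      NULL, it is not part of the index used to locate the row, and the
      table has no update trigger.

      UPDATE orders
         SET ord_status = 'S'
       WHERE order_id = @order_id   /* row found via the unique index on order_id */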
     
Promote Parallel I/O Opportunities

     * For I/O-bound Multi-User Systems, Use A lot of Logical and
       Physical Devices
     * Plan balanced separation of objects across logical and physical
       devices.
     * Increased number of physical devices (including controllers)
       ensures physical bandwidth
     * Increased number of logical Sybase devices ensures minimal
       contention for internal resources. Look at SQL Monitor's Device
       I/O Hit Rate for clues. Also watch out for the 128 device limit
       per database.
      * Create Database (in v10) starts parallel I/O on up to 6 devices
        concurrently. If taken advantage of, expect an 800%
       performance gain. A 2Gb TPC-B database that took 4.5 hours under
       4.9.1 to create now takes 26 minutes if created on 6 independent
       devices !
     * Use Sybase Segments to Ensure Control of Placement
       
      This is the only way to guarantee logical separation of objects on
      devices to reduce contention for internal resources.
      * Dedicate a separate physical device and controller to the
        transaction log in tempdb too.
      * Optimize tempdb also if heavily accessed
      * Increased number of logical Sybase devices ensures minimal
        contention for internal resources.
      * Systems requiring increased log throughput today must partition the
        database into separate databases.
       
        Breaking up one logical database into multiple smaller databases
        increases the number of transaction logs working in
        parallel.
       
Networking Issues

     * Choice of Transport Stacks
     * Variable Sized TDS Packets
     * TCP/IP Packet Batching
       
  Choice of Transport Stacks for PCs
     * Choose a Stack that Supports "Attention Signals" (aka. "Out of
       Band Data")
     * Provides for the most efficient mechanism to cancel queries.
     * Essential for sites providing ad-hoc query access to large
       databases.
     * Without "Attention Signal" capabilities (or the urgent flag in the
       connection string), the DB-Library functions DBCANQUERY ( ) and
       DBCANCEL ( ) will cause SQL Server to send all rows back to the
       Client DB-Library as quickly as possible so as to complete the
       query. This can be very expensive if the result set is large and,
       from the user's perspective, causes the application to appear as
       though it has hung.
     * With "Attention Signal" capabilities, Net-Library is able to send
       an out-of-sequence packet requesting the SQL Server to physically
       throw away any remaining results providing for instantaneous
       response.
     * Currently, the following network vendors and associated protocols
        support an "Attention Signal" capable implementation:
         1. NetManage NEWT
         2. FTP TCP
         3. Named Pipes (10860) - Do not use urgent parameter with this
            Netlib
          4. Novell LAN Workplace v4.1 - Patch required from Novell
         5. Novell SPX - Implemented internally through an "In-Band"
            packet
         6. Wollongong Pathway
         7. Microsoft TCP - Patch required from Microsoft
            
  Variable-sized TDS Packets
  
   Pre-v4.6 TDS does not optimize network performance: the current SQL
   Server TDS packet size is limited to 512 bytes while network frame
   sizes are significantly larger (1508 bytes on Ethernet and 4120 bytes
   on Token Ring).
   
     The specific protocol may have other limitations!
     
   For example:
     * IPX is limited to 576 bytes in a routed network.
     * SPX requires acknowledgement of every packet before it will send
       another. A recent benchmark measured a 300% performance hit over
       TCP in "large" data transfers (small transfers showed no
       difference).
      * Open Client Apps can "Request" a Larger Packet. This has been shown
        to give significant performance improvement on "large" data
        transfers such as BCP, Text / Image Handling, and Large Result Sets.
          + clients:
               o isql -Usa -Annnnn
               o bcp -Usa -Annnnn
               o ct_con_props (connection, CS_SET, CS_PACKETSIZE,
                 &packetsize, sizeof(packetsize), NULL)
          + An "SA" must Configure each Servers' Defaults Properly
               o sp_configure "default packet size", nnnnn - Sets default
                 packet size per client connection (defaults to 512)
               o sp_configure "maximum packet size", nnnnn - Sets maximum
                 TDS packet size per client connection (defaults to 512)
               o sp_configure "additional netmem", nnnnn - Additional
                 memory for large packets taken from separate pool. This
                 memory does not come from the sp_configure memory
                 setting.
                 
                  Optimal value = ((# connections using large packets *
                  large packet size * 3) + an additional 1-2% of the above
                  calculation for overhead)
                 
                 Each connection using large packets has 3 network
                 buffers: one to read; one to write; and one overflow.
                 
                    # Default network memory - Default-sized packets come
                      from this memory pool.
                     # Additional Network memory - Big packets come from
                       this memory pool.
                      
                      If not enough memory is available in this pool, the
                      server will give a smaller packet size, down to the
                      default
                      
  TCP/IP Packet Batching
     * TCP Networking Layer Defaults to "Packet Batching"
     * This means that TCP/IP will batch small logical packets into one
       larger physical packet by briefly delaying packets in an effort to
       fill the physical network frames (Ethernet, Token-Ring) with as
       much data as possible.
     * Designed to improve performance in terminal emulation environments
       where there are mostly only keystrokes being sent across the
       network.
     * Some Environments Benefit from Disabling Packet Batching
     * Applies mainly to socket-based networks (BSD) although we have
       seen some TLI networks such as NCR's benefit.
     * Applications sending very small result sets or statuses from
       sprocs will usually benefit. Benchmark with your own application
       to be sure.
     * This makes SQL Server open all connections with the TCP_NODELAY
       option. Packets will be sent regardless of size.
     * To disable packet batching, in pre-Sys 11, start SQL Server with
       the 1610 Trace Flag.
       
     $SYBASE/dataserver -T1610 -d /usr/u/sybase/master.dat ...
   
       
       Your errorlog will indicate the use of this option with the
       message:
       
     SQL Server booted with TCP_NODELAY enabled.
     
Operating System Issues

     * Never Let SQL Server Page Fault
     * It is better to configure SQL Server with less memory and do more
       physical database I/O than to page fault. OS page faults are
       synchronous and stop the entire dataserver engine until the page
       fault completes. Since database I/O's are asynchronous, other user
       tasks can continue!
     * Use Process Affinitying in SMP Environments, if Supported
     * Affinitying dataserver engines to specific CPUs minimizes overhead
       associated with moving process information (registers, etc)
       between CPUs. Most implementations will preference other tasks
       onto other CPUs as well allowing even more CPU time for dataserver
       engines.
     * Watch out for OS's which are not fully symmetric. Affinitying
       dataserver engines onto CPUs that are heavily used by the OS can
       seriously degrade performance. Benchmark with your application to
       find optimal binding.
     * Increase priority of dataserver engines, if supported
     * Give SQL Server the opportunity to do more work. If SQL Server has
       nothing to do, it will voluntarily yield the CPU.
     * Watch out for OS's which externalize their async drivers. They
       need to run too!
     * Use of OS Monitors to Verify Resource Usage
     * The OS CPU monitors only "know" that an instruction is being
       executed. With SQL Server's own threading and scheduling, it can
        routinely be 90% idle when the OS thinks it's 90% busy. SQL Monitor
       shows real CPU usage.
     * Look into high disk I/O wait time or I/O queue lengths. These
       indicate physical saturation points in the I/O subsystem or poor
       data distribution.
     * Disk Utilization above 50% may be subject to queuing effects which
       often manifest themselves as uneven response times.
     * Look into high system call counts which may be symptomatic of
       problems.
     * Look into high context switch counts which may also be symptomatic
       of problems.
     * Optimize your kernel for SQL Server (minimal OS file buffering,
       adequate network buffers, appropriate KEEPALIVE values, etc).
     * Use OS Monitors and SQL Monitor to Determine Bottlenecks
     * Most likely "Non-Application" contention points include:

   Resource                    Where to Look
   ---------                   --------------
   CPU Performance             SQL Monitor - CPU and Trends

   Physical I/O Subsystem      OS Monitoring tools - iostat, sar...

   Transaction Log             SQL Monitor - Device I/O and
                                             Device Hit Rate
                                             on Log Device

   SQL Server Network Polling  SQL Monitor - Network and Benchmark
                                             Baselines

   Memory                      SQL Monitor - Data and Cache
                                             Utilization
                                        
      * Use of Vendor-Supported Striping such as LVM and RAID
     * These technologies provide a very simple and effective mechanism
       of load balancing I/O across physical devices and channels.
     * Use them provided they support asynchronous I/O and reliable
       writes.
     * These approaches do not eliminate the need for Sybase segments to
       ensure minimal contention for internal resources.
     * Non-read-only environments should expect performance degradations
       when using RAID levels other than level 0. These levels all
       include fault tolerance where each write requires additional reads
       to calculate a "parity" as well as the extra write of the parity
       data.
       
Hardware Configuration Issues

     * Number of CPUs
     * Use information from SQL Monitor to assess SQL Server's CPU usage.
     * In SMP environments, dedicate at least one CPU for the OS.
      * Advantages and scaling of VSA are application-dependent. VSA was
       architected with large multi-user systems in mind.
     * I/O Subsystem Configuration
     * Look into high Disk I/O Wait Times or I/O Queue Lengths. These may
       indicate physical I/O saturation points or poor data distribution.
     * Disk Utilization above 50% may be subject to queuing effects which
       often manifest themselves as uneven response times.
     * Logical Volume configurations can impact performance of operations
       such as create database, create index, and bcp. To optimize for
       these operations, create Logical Volumes such that they start on
       different channels / disks to ensure I/O is spread across
       channels.
     * Discuss device and controller throughput with hardware vendors to
       ensure channel throughput high enough to drive all devices at
       maximum rating.
       
General SQL Server Tuning

     * Changing Values with sp_configure or buildmaster
       
     _It is imperative that you only use sp_configure to change those
     parameters that it currently maintains because the process of
     reconfiguring actually recalculates a number of other buildmaster
     parameters. Using the Buildmaster utility to change a parameter
     "managed" by sp_configure may result in a mis-configured server and
     cause adverse performance or even worse ... _
     * Sizing Procedure Cache
          + SQL Server maintains an MRU-LRU chain of stored procedure
            query plans. As users execute sprocs, SQL Server looks in
            cache for a query plan to use. However, stored procedure
            query plans are currently not re-entrant! If a query plan is
            available, it is placed on the MRU and execution begins. If
            no plan is in memory, or if all copies are in use, a new copy
            is read from the sysprocedures table. It is then optimized
            and put on the MRU for execution.
          + Use dbcc memusage to evaluate the size and number of each
            sproc currently in cache. Use SQL Monitor's cache statistics
            to get your average cache hit ratio. Ideally during
            production, one would hope to see a high hit ratio to
            minimize the procedure reads from disk. Use this information
             in conjunction with your desired hit ratio to calculate the
            amount of memory needed.
     * Memory
          + Tuning memory is more a price/performance issue than anything
             else ! The more memory you have available, the greater the
             probability of minimizing physical I/O. This is an important
            goal though. Not only does physical I/O take significantly
            longer, but threads doing physical I/O must go through the
            scheduler once the I/O completes. This means that work on
            behalf of the thread may not actually continue to execute for
            quite a while !
          + There are no longer (as of v4.8) any inherent limitations in
            SQL Server which cause a point of diminishing returns on
            memory size.
           + Calculate Memory based on the following algorithm (a worked
             example appears at the end of this list):
            
     Total Memory = Dataserver Executable Size (in bytes) +
     Static Overhead of 1 Mb +
     User Connections x 40,960 bytes +
     Open Databases x 644 bytes +
     Locks x 32 bytes +
     Devices x 45,056 bytes +
     Procedure Cache +
     Data Cache
     * Recovery Interval
          + As users change data in SQL Server, only the transaction log
            is written to disk right away for recoverability. "Dirty"
            data and index pages are kept in cache and written to disk at
            a later time. This provides two major benefits:
              1. Many transactions may change a page yet only one
                 physical write is done
              2. SQL Server can schedule the physical writes "when
                 appropriate"
          + SQL Server must eventually write these "dirty" pages to disk.
          + A checkpoint process wakes up periodically and "walks" the
            cache chain looking for dirty pages to write to disk
          + The recovery interval controls how often checkpoint writes
            dirty pages.
     * Tuning Recovery Interval
          + A low value may cause unnecessary physical I/O lowering
            throughput of the system. Automatic recovery is generally
            much faster during boot-up.
          + A high value minimizes unnecessary physical I/O and helps
            throughput of the system. Automatic recovery may take
            substantial time during boot-up.
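
      As a worked example of the memory formula above (every figure is an
      assumption, purely for illustration): a 2.5 Mb dataserver executable,
      200 user connections, 10 open databases, 5,000 locks and 20 devices
      give a fixed overhead of

      Executable                         2,621,440
      Static overhead                    1,048,576
      200 connections x 40,960           8,192,000
      10 databases    x 644                  6,440
      5,000 locks     x 32                 160,000
      20 devices      x 45,056             901,120
                                        ----------
                                        12,929,576 bytes (roughly 12.3 Mb)

      before procedure cache and data cache; whatever memory you configure
      beyond this overhead and the procedure cache ends up as data cache.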
            
Audit Performance Tuning for v10.0

     * Potentially as Write Intensive as Logging
     * Isolate Audit I/O from other components.
     * Since auditing nearly always involves sequential writes, RAID
       Level 0 disk striping or other byte-level striping technology
       should provide the best performance (theoretically).
     * Size Audit Queue Carefully
      * Audit records generated by clients are stored in an in-memory
        audit queue until they can be processed.
     * Tune the queue's size with sp_configure "audit queue size", nnnn
       (in rows).
     * Sizing this queue too small will seriously impact performance
       since all user processes who generate audit activity will sleep if
       the queue fills up.
     * Size Audit Database Carefully
     * Each audit row could require up to 416 bytes depending on what is
       audited.
     * Sizing this database too small will seriously impact performance
       since all user processes who generate audit activity will sleep if
       the database fills up.
       
   
     _________________________________________________________________

                          Q3.2: TEMP TABLES AND OLTP
                                       
   
     _________________________________________________________________
   
   
   
   
   Our shop would like to inform folks of a potential problem when using
   _temporary tables in an OLTP environment._ Using temporary tables
   dynamically in an OLTP production environment may result in blocking
   (single-threading) as the number of transactions using the temporary
   tables increases.
   
  Does it affect my application?
  
   
   
   This warning only applies to SQL that is invoked frequently in
   an OLTP production environment, where the use of _"select into..." or
   "create table #temp"_ is common. Applications using temp tables may
   experience blocking problems as the number of transactions increases.
   
   This warning does not apply to SQL that may be in a report or that is
   not used frequently. _Frequently_ is defined as several times per
   second.
   
  Why? Why? Why?
  
   
   
   Our shop was working with an application owner to chase down a problem
   they were having during peak periods. The problem they were having was
   severe blocking in tempdb.
   
   What was witnessed by the DBA group was that as the number of
   transactions increased on this particular application, the number of
   blocks in tempdb also increased.
   
   We ran some independent tests to simulate a heavily loaded server and
   discovered that the data pages in contention were in tempdb's
   _syscolumns_ table.
   
   This actually makes sense because during table creation entries are
   added to this table, regardless of whether it's a temporary or
   permanent table.
   
   We ran another simulation where we created the tables before the
   stored procedure used them, and the blocks went away. We then performed
   an additional test to determine what impact creating temporary tables
   dynamically would have on the server and discovered that there is a
   33% performance gain by creating the tables once rather than
   re-creating them.
   
   Your mileage may vary.
   
  How do I fix this?
  
   
   
   To make things better, do the 90's thing -- _reduce and reuse your
   temp tables._ During one application connection/session, aim to create
   the temp tables only once.
   
   Let's look at the lifespan of a temp table. If temp tables are created
   in a batch within a connection, then all future batches and stored
   procs will have access to such temp tables until they're dropped; this
   is the reduce and reuse strategy we recommend. However, if temp tables
   are created in a stored proc, then the database will drop the temp
   tables when the stored proc ends, and this means repeated and multiple
   temp table creations; you want to avoid this.
   
   Recode your stored procedures so that they assume that the temporary
   tables already exist, and then alter your application so that it
   creates the temporary tables at start-up -- once and not every time
   the stored procedure is invoked.
   
   That's it! Pretty simple eh?
   
  Summary
  
   
   
   The upshot is that you can realize roughly a 33% performance gain and
   avoid the blocking, which is difficult to quantify because it depends
   on the specifics of each application.
   
   Basically, you cannot lose.
   
  Solution in pseudo-code
  
   If you have an application that creates the same temp table many times
   within one connection, here's how to convert it to reduce and reuse
   temp table creations. Raymond Lew has supplied a detailed example for
   trying this. 
   
    Old

open connection
  loop until time to go
    exec procedure vavoom_often
      /* vavoom_often creates and uses #gocart for every call */
      /* eg: select * into #gocart from gocart */
    go
          .
          .
          .
  loop-end
close connection

    New

open connection
  /* Create the temporary table outside of the sproc */
  select * into #gocart from gocart where 1 = 2
  go
  loop until time to go
    exec procedure vavoom_often
      /* vavoom_often reuses #gocart which */
      /*   was created before exec of vavoom_often */
      /* - First statement may be a truncate table #gocart */
      /* - Execute _with recompile_ */
      /*   if your table will have more than 10 data pages */
      /*   as the optimizer will assume 10 data pages for temp tables */
    go
          .
          .
          .
  loop-end
close connection

   
   
   Note that the code to create the table now has to be called out
   explicitly, which becomes a pain in the butt: the create-table
   statement has to be replicated in any stored proc that uses it and in
   the initialization part of the application, which can be a maintenance
   nuisance. This can be eased by using a macro package such as _m4_ or
   _cpp_, or by using and adapting the scripts from Raymond Lew below.
     _________________________________________________________________
   
   
   
   From: Raymond Lew
   
   At our company, we try to keep the database and the application
   loosely coupled to allow independent changes at the frontend or the
   backend as long as the interface stays the same. Embedding temp table
   definitions in the frontend would make this more difficult.
   
   To get away from having to embed the temp table definitions in the
   frontend code, we are storing the temp table definitions in the
   database. The frontend programs retrieve the definitions and declare
   the tables dynamically at the beginning of each session. This allows
   for the change of backend procedures without changes in the frontend
   when the API does not change.
   
   Enclosed below are three scripts. The first is an isql script to
   create the tables to hold the definitions. The second is a shell
   script to set up a sample procedure named vavoom. The third is a
   shell script to demonstrate the structure of application code.
   
   I would like to thank Charles Forget and Gordon Rees for their
   assistance on these scripts.

--start of setup------------------------------------------------------
/* Raymond Lew - 1996-02-20 */
/* This isql script will set up the following tables:
   gocart - sample table
   app_temp_defn - where temp table definitions are stored
   app_temp_defn_group - a logical grouping of temp table definitions
                         for an application function
*/

/******************************/
/* gocart table - sample table*/
/******************************/
drop table gocart
go
create table gocart
(
  cartname    char(10) null
 ,cartcolor   char(30) null
)
go
create unique clustered index  gocart1 on gocart (cartname)
go
insert into gocart values ('go1','blue ')
insert into gocart values ('go2','pink ')
insert into gocart values ('go3','green ')
insert into gocart values ('go4','red ')
go


/****************************************************************/
/* app_temp_defn - definition of temp tables with their indexes */
/****************************************************************/
drop table app_temp_defn
go
create table app_temp_defn
(
  /* note: temp tables are unique only in first 13 chars */
  objectname  char(20)     not null
 ,seq_no      smallint     not null
 ,defntext    char(255)    not null
)
go
create unique clustered index  app_temp_defn1
  on app_temp_defn (objectname,seq_no)
go
insert into app_temp_defn
values ('#gocart',1,'select * into #gocart')
insert into app_temp_defn
values ('#gocart',2,' from gocart where 1=2 ')
go
insert into app_temp_defn
values ('#gocartindex',1,
 "create unique index gocartindex on #gocart (cartname) ")
go
insert into app_temp_defn
values ('#gocart1',1, 'select * into #gocart1  from gocart where 1=2')
go




/***********************************************************************/
/* app_temp_defn_group - groupings of temp definitions by applications */
/***********************************************************************/
drop table app_temp_defn_group
go
create table app_temp_defn_group
(
  appname     char(8)  not null
 ,objectname  char(20) not null
)
go
create unique clustered index  app_temp_defn_group1
 on app_temp_defn_group (appname,objectname)
go
insert into app_temp_defn_group values('abc','#gocart')
insert into app_temp_defn_group values('abc','#gocartindex')
go



/***********************************************************/
/* get_temp_defn - proc for getting the temp defn by group */
/***********************************************************/
drop procedure get_temp_defn
go
create procedure get_temp_defn
(
@appname               char(8)
)
as

if @appname = ''
  select defntext
    from app_temp_defn
    order by objectname, seq_no
else
  select defntext
    from app_temp_defn a
       , app_temp_defn_group b
   where a.objectname = b.objectname
     and b.appname = @appname
   order by a.objectname, a.seq_no

return
go

/* let's try some tests */
exec get_temp_defn ''
go
exec get_temp_defn 'abc'
go
--end of setup      --------------------------------------------------






--- start of make.vavoom --------------------------------------------
#!/bin/sh
# Raymond Lew - 1996-02-20
#
# bourne shell script for creating stored procedures using
# app_temp_defn table
#
# demo procedure vavoom created here
#
# note: you have to change the passwords, ids, etc. for your site
# note: you might have to make some inline changes to make this work;
#       check out the notes within the body


# get the table defn's into a text file
#
# note: next line :you will need to end the line immediately after eot \
isql -Ukryten -Pjollyguy -Sstarbug  -w255 << eot \
| grep -v '\-\-\-\-' | grep -v 'defntext  ' | grep -v ' affected' >tabletext
exec get_temp_defn ''
go
eot
# note: prev line :you will need to have a newline immediately after eot

# go mess around in vi
vi tabletext

#
# create the proc vavoom after running the temp defn's into db
#
isql -Ukryten -Pjollyguy -Sstarbug  -e << eot |more
`cat tabletext`
go
drop procedure vavoom
go
create procedure vavoom
(
@color               char(10)
)
as
truncate table #gocart1 /* who knows what lurks in temp tables */
if @color = ''
  insert #gocart1 select * from gocart
else
  insert #gocart1 select * from gocart where cartcolor=@color
select @color '@color', * from #gocart1
return
go
exec vavoom ''
go
exec vavoom 'blue'
go
eot
# note: prev line :you will need to have a newline immediately after eot

exit
# end of unix script
---   end of make.vavoom --------------------------------------------





--- start of defntest.sh -------------------------------------------
#!/bin/sh
# Raymond Lew 1996-02-01
#
# test script: demonstrate with a bourne shell how an application
# would use the temp table definitions stored in the database
#
# note: you must run setup and make.vavoom first
#
# note: you have to change the passwords, ids, etc. for your site
# note: you might have to make some inline changes to make this work;
#       check out the notes within the body

# get the table defn's into a text file
#
# note: next line :you will need to end the line immediately after eot \
isql -Ukryten -Pjollyguy -Sstarbug  -w255 << eot \
| grep -v '\-\-\-\-' | grep -v 'defntext  ' | grep -v ' affected' >tabletext
exec get_temp_defn ''
go
eot
# note: prev line :you will need to have a newline immediately after eot

# go mess around in vi
vi tabletext

isql -Ukryten -Pjollyguy -Sstarbug   -e << eot | more
`cat tabletext`
go
exec vavoom ''
go
exec vavoom 'blue'
go
eot
# note: prev line :you will need to have a newline immediately after eot

exit
# end of unix script
---   end of defntest.sh -------------------------------------------

   
   That's all, folks. Have Fun
     _________________________________________________________________

           Q3.3: DIFFERENCES BETWEEN _CLUSTERED_ AND _NON-CLUSTERED_
                                       
   
     _________________________________________________________________
   
Preface

   I'd like to talk about the difference between a clustered and a
   non-clustered index. The two are _very_ different, and it's important
   to understand the difference in order to know when and how to use
   each.
   
   I've pondered hard to find the best analogy that I could think of and
   I've come up with ... the phone book. Yes, a phone book.
   
   Imagine that each page in our phone book is equivalent to a Sybase 2K
   data page. Every time we read a page from our phone book it is
   equivalent to one disk I/O.
   
   Since we are imagining, let's also imagine that our mythical SQL
   Server (that runs against the phone book) has only enough data cache
   to buffer 200 phone pages. When our data cache gets full we have to
   flush an old page out so we can read in a new one.
   
   Fasten your seat belts, because here we go...
   
Clustered Index

   A phone book lists everyone by last name. We have an _A_ section, we
   have a _B_ section and so forth. Within each section my phone book is
   clever enough to list the starting and ending names for the given
   page.
   
   The phone book is clustered by last name.
   
     create clustered index phone_book_clust on phone_book (last_name)
     
   It's fast to perform the following queries on the phone book:
     * Find the address of those whose last name is _Cisar_.
     * Find the address of those whose last name is between _Even_ and
       _Fa_
       
   Searches that don't work well:
     * Find the address of those whose phone number is _440-1300_.
     * Find the address of those whose prefix is _440_
       
   In order to answer the two queries above we'd have to search the
   entire phone book. We can call that a table scan.
   
Non-Clustered Index

   To help us solve the problem above we can build a non-clustered index.
   
     create nonclustered index phone_book_nc on phone_book (phone_number)
     
   Our non-clustered index will be built and maintained by our Mythical
   SQL Server as follows:
    1. Create a data structure that will house a _phone_number_ and
       information where the _phone_number_ exists in the phone book:
       page number and the row within the page.
       
       The phone numbers will be kept in ascending order.
    2. Scan the _entire_ phone book and add an entry to our data
       structure above for _each_ phone number found.
    3. For each phone number found, note alongside it the page number on
       which it was located _and_ which row it was in.
       
   Any time we insert, update or delete phone numbers, our Mythical SQL
   Server will maintain this secondary data structure. It's such a nice
   Server.
   
   Now when we ask the question:
   
     Find the address of those whose phone number is _440-1300_
     
   we don't look at the phone book directly but go to our new data
   structure, which tells us the page, and the row within the page, where
   the above phone number can be found. Neat, eh?
   
   Drawbacks? Well, yes, because we _probably_ still can't answer the
   question:
   
     Find the address of those whose prefix is _440_
     
   This is because of the data structure used to implement non-clustered
   indexes: a list of ordered values (phone numbers) that point to the
   actual data in the phone book. This indirection can lead to trouble
   when a range or match query is issued.
   
   The structure may look like this:
   

------------------------------------
|Phone Number   |  Page Number/Row |
====================================
| 440-0000      |  300/23          |
| 440-0001      |  973/45          |
| 440-0002      |   23/2           |
| ...           |                  |
| 440-0030      |  973/45          |
| 440-0031      |  553/23          |
| ...           |                  |
------------------------------------

   As one can see, certain phone numbers may map to the same page. This
   makes sense, but we need to consider one of our constraints: our
   Server only has room for 200 phone pages.
   
   What may happen is that we re-read the same phone page many times.
   This isn't a problem if the phone page is in memory. We have limited
   memory, however, and we may have to flush our memory to make room for
   other phone pages. So the re-reading may actually be a disk I/O.
   
   The Server needs to decide when it's best to do a table scan versus
   using the non-clustered index to satisfy mini-range type of queries.
   The way it decides this is by applying a heuristic based on the
   information maintained when an _update statistics_ is performed.
   
   In summary, non-clustered indexes work really well when used for
   highly selective queries and they may work for short, range type of
   queries.
   
Suggested Uses

   Having suffered many table corruption situations (with 150 SQL
   Servers, who wouldn't? :-)), I'd say _always_ have a clustered index.
   With a clustered index you can fish data out around the _bad_ spots
   on the table, thus minimizing data loss.
   
   When you cluster, build the cluster to satisfy the largest percentage
   of range type queries. Don't put the clustered index on your primary
   key because typically primary keys are increasing linearly. What
   happens is that you end up inserting all new rows at the end of the
   table thus creating a hot spot on the last data page.
   
   For detail rows, create the clustered index on the commonly accessed
   foreign key. This will aid joins from the master to it.
   
   Use nonclustered indexes to aid queries where your selection is
   _very_ selective, for example primary keys. :-)
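   
   As a minimal sketch of the two suggestions above, consider a
   hypothetical master/detail pair of tables (the table and index names
   are illustrative only):

/* master rows: the key increases linearly, so keep the clustered  */
/* index off it and serve primary key lookups with a nonclustered  */
/* unique index instead                                            */
create table orders
(
  order_id   int       not null
 ,cust_name  char(30)  not null
)
go
create unique nonclustered index orders_pk on orders (order_id)
go

/* detail rows: cluster on the commonly accessed foreign key so    */
/* master-to-detail joins read adjacent pages                      */
create table order_items
(
  order_id   int       not null    /* foreign key back to orders */
 ,item_no    smallint  not null
 ,qty        int       not null
)
go
create clustered index order_items_fk on order_items (order_id)
go
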
     _________________________________________________________________

                 Q3.4: OPTIMISTIC VERSUS PESSIMISTIC LOCKING?
                                       
   
     _________________________________________________________________
   
   This is the same problem another poster had ... basically locking a
   record to ensure that it hasn't changed underneath ya.
   
   fcasas@ix.netcom.com has a pretty nifty solution if you are using
   ct-lib (I'll include that below -- hope it's okay Francisco ... :-))
   ...
   
   Basically the problem you are facing is one of being a pessimist or an
   optimist.
   
   I contend that your business really needs to drive this.
   
   Most businesses (from my experience) can be optimistic.
   
   That is, if you are optimistic that the chances that someone is going
   to change something from underneath the end-user is _low_, then do
   _nothing_ about it.
   
   On the other hand, if you are pessimistic that someone may change
   something underneath the end-user, you can solve it at least as
   follows:
   
  Solution #1
  
   Use a timestamp on a header table that is shared by the common data.
   This timestamp field is a Sybase datatype and has nothing to do with
   the current time; do not attempt any operations on this column other
   than comparisons. When you grab data to present to the end-user, have
   the client software also grab the timestamp column value. After some
   _think time_, if the end-user wishes to update the database, compare
   the client's timestamp with what's in the database and, if it has
   changed, take the appropriate action: again, this is dictated by the
   business.
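   
   A minimal sketch of Solution #1, assuming a hypothetical _case_hdr_
   table (the names, the literal timestamp value and the column
   declaration are illustrative only; check your release's documentation
   for its timestamp column conventions -- tsequal() is another way to
   do the comparison):

/* hypothetical header table carrying the shared timestamp */
create table case_hdr
(
  case_id    int       not null
 ,status     char(10)  not null
 ,timestamp                  /* Sybase timestamp column, kept current by the server */
)
go

/* 1. the client reads the row _and_ its timestamp value */
select case_id, status, timestamp
  from case_hdr
 where case_id = 42
go

/* 2. after some think time, update only if the row is unchanged */
update case_hdr
   set status  = 'closed'
 where case_id = 42
   and timestamp = 0x0000000000000123   /* the value read in step 1 */

if @@rowcount = 0
   print "Row changed underneath us - the business rules decide what happens next"
go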
   
  Problem #1
  
   If users are sharing tables but columns are not shared, there's no way
   to detect this using timestamps because it's not sufficiently
   granular.
   
  Solution #2 (presented by fcasas)
  
   ... Also are you coding to ct-lib directly? If so there's something
   that you could have done, or may still be able to do if you are using
   cursors.
   
   With ct-lib there's a ct_describe function that lets you see _key
   data_. This allows you to implement optimistic locking with cursors
   without needing timestamps. Timestamps are nice, but they change when
   any column in a row changes, while the ct_describe mechanism detects
   changes at the column level for a finer degree of granularity. In
   other words, timestamp granularity is at the row level, while
   ct_describe's CS_VERSION_KEY provides granularity at the column
   level.
   
   Unfortunately this is not well documented and you will have to look at
   the training guide and the manuals very closely.
   
   Further if you are using cursors _do not_ make use of the
   
     [for {read only | update [of column_name_list]}]
     
   of the _select_ statement. Omitting this clause will still get you
   updatable data while placing only a shared lock on the page. If you
   use the _read only_ clause you acquire shared locks, but the cursor
   is not updatable. However, specifying
   
     update [of ...
     
   will place update locks on the page, thus causing contention. So, if
   you are using cursors, _don't use_ the above clause. Could you
   answer the following three questions:
    1. Are you using optimistic locking?
    2. Are you coding to ct-lib?
    3. Are you using cursors?
       
  Problem #2
  
   You need to be coding with ct-lib ...
   
  Solution #3
  
   Do nothing and be optimistic. We do a lot of that in our shop and it's
   really not that big of a problem.
   
  Problem #3
  
   Users may clobber each other's changes ... then they'll come looking
   for you to clobber you! :-)
     _________________________________________________________________

                   Q3.5: HOW DO I FORCE AN INDEX TO BE USED?
                                       
   
     _________________________________________________________________
   
  Sybase 4.x and Sybase System 10
  
   All indexes have an ordinal value assigned to them. For example, the
   following query will return the ordinal value of all the indexes on
   _my_table_:
   

select name, indid from sysindexes where id = object_id("my_table")

   Assuming that we wanted to force the use of the index numbered three:
   

select ... from my_table(3)

   Note that using a value of zero is equivalent to a _table scan_.
   
     _You should heavily document any indexes that are forced._
     
  System 11
  
   In System 11, the binding to the internal ordinal value is removed,
   so that the index name can be used instead of the ordinal index
   value:

select ... from my_table (_index_ my_first_index)

   Note that you should _still_ document that the index is being forced.
   
     _________________________________________________________________

            Q3.6: WHY PLACE TEMPDB AND LOG ON LOW NUMBERED DEVICES?
                                       
   _System 10 and below._
     _________________________________________________________________
   
   In System 10 and Sybase 4.X, the I/O scheduler starts at logical
   device (_ldev_) zero and works up the _ldev_ list looking for
   outstanding I/O's to process. Taking this into consideration, the
   following device fragments (_disk init_) should be added before any
   others:
    1. tempdb
    2. log
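   
   For example (a hedged sketch: the device names, physical paths and
   sizes are illustrative only and must be adapted to your own
   configuration), initialize the tempdb and log devices first so that
   they receive the lowest virtual device numbers:

disk init name = "tempdb_dev",
    physname = "/dev/rdsk/sd1a",       /* illustrative raw partition */
    vdevno = 1, size = 51200           /* 100MB in 2K pages */
go

disk init name = "log_dev",
    physname = "/dev/rdsk/sd1b",
    vdevno = 2, size = 25600           /* 50MB in 2K pages */
go

/* data devices follow, on higher vdevno's */
disk init name = "data_dev1",
    physname = "/dev/rdsk/sd2a",
    vdevno = 3, size = 512000          /* 1GB in 2K pages */
go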
       
   
     _________________________________________________________________
Q3.7: How much memory to configure?

System 10 and below.
----------------------------------------------------------------------------

Overview

At some point you'll wonder if your SQL Server has been configured with
sufficient memory. We hope that it's not during some crisis but that's
probably when it'll happen.

The most important thing in setting up memory for a SQL Server is that it
has to be large enough to accommodate:

   * concurrent user connections
   * active procedures
   * and concurrent open databases.

Not setting the SQL Server up correctly will affect its performance. A
delicate balance needs to be struck: your SQL Server must be large enough
to accommodate the users, but not so large that it adversely affects the
machine hosting it (for example, by causing swapping).

Assumptions made of the reader:

   * The reader has some experience administering SQL Servers.
   * All queries have been tuned and there are no unnecessary table
     scans.

Preface

As the SQL Server starts up, it pre-allocates its structures to support the
configuration. The memory that remains after the pre-allocation phase is the
available cache.

The available cache is partitioned into two pieces:

  1. buffer cache - data pages to be sent to a user connection or flushed to
     disk.
  2. procedure cache - where query plans live.

The idea is to determine if the buffer cache and the procedure cache are of
adequate size. As a DBA you can use dbcc memusage to ascertain this.

The information provided by dbcc memusage is daunting at first, but taken
in sections it is easy to understand, and it provides the DBA with the
vital information necessary to determine if more memory is required and
where it is required.

If the procedure cache is too small, user connections will get sporadic
701's:

     There is insufficient system memory to run this query.

If the buffer cache is too small, response time may be poor or spiky.

The following text describes how to interpret the output of dbcc memusage
and to correlate this back to the fundamental question:

     Does my SQL Server have enough memory?

Definitions

Before delving into the world of dbcc memusage some definitions to get us
through.

Buffer Cache (also referred to as the Data Cache)
     Area of memory where SQL Server stores the most recently used data
     pages and index pages in 2K page units. If SQL Server finds a data page
     or index page in the buffer cache, it doesn't need to perform a
     physical I/O (it is reported as a logical I/O). If a user connection
     selects data from a database, the SQL Server loads the 2K data page(s)
     here and then hands the information off to the user connection. If a
     user connection updates data, these pages are altered, and then they
     are flushed out to disk by the SQL Server.

          This is a bit simplistic but it'll do. Read on for more info
          though.

          The cache is maintained as a doubly linked list. The head of
          the list is where the most recently used pages are placed.
          Naturally towards the tail of the chain are the least
          recently used pages. If a page is requested and it is found
          on the chain, it is moved back to the front of the chain and
          the information is relayed, thus saving a physical I/O.

          But wait! This recycling is not done forever. When a
          checkpoint occurs any dirty pages are flushed. Also, the
          parameter cbufwashsize determines how many times a page
          containing data can be recycled before it has to be flushed
          out to disk. For OAM and index pages the corresponding
          parameters are coamtrips and cindextrips, respectively.

Procedure Cache
     Area of memory where SQL Server stores the most recently used query
     plans of stored procedures and triggers. This procedure cache is also
     used by the Server when a procedure is being created and when a query
     is being compiled. Just like the buffer cache, if SQL Server finds a
     procedure or a compilation already in this cache, it doesn't need to
     read it from the disk.

     The size of procedure cache is determined by the percentage of
     remaining memory configured for this Server parameter after SQL Server
     memory needs are met.

Available Cache

When the SQL Server starts up it pre-allocates its data structures to
support the current configuration. For example, based on the number of user
connections, additional netmem, open databases and so forth the dataserver
pre-allocates how much memory it requires to support these configured items.

What remains after the pre-allocation is the available cache. The available
cache is divided into buffer cache and procedure cache. The sp_configure
"procedure cache" parameter determines the percentage breakdown. A value of
20 would read as follows:

     20% of the available cache is dedicated to the procedure cache and
     80% is dedicated to the buffer cache.
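
A quick way to see the current split (a hedged sketch using System 10
sp_configure syntax; exact parameter names and behaviour may vary by
release):

1> sp_configure "procedure cache"
2> go
1> sp_configure "memory"
2> go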

Your pal: dbcc memusage

dbcc memusage takes a snapshot of your SQL Server's current memory usage and
reports this vital information back to you. The information returned
provides information regarding the use of your procedure cache and how much
of the buffer cache you are currently using.

An important piece of information is the size of the largest query plan.
We'll talk about that more below.

It is best to run dbcc memusage after your SQL Server has reached a working
set. For example, at the end of the day or during lunch time.

     Running dbcc memusage will freeze the dataserver while it does its
     work. The more memory you have configured for the SQL Server the
     longer it'll take. Our experience is that for a SQL Server with
     300MB it'll take about four minutes to execute. During this time,
     nothing else will execute: no user queries, no sp_who's...

In order to run dbcc memusage you must have sa privileges. Here's a sample
execution for discussion purposes:

1> /* send the output to the screen instead of errorlog */
2> dbcc traceon(3604)
3> go
1> dbcc memusage
2> go
Memory Usage:

                            Meg.         2K Blks           Bytes

      Configured Memory:300.0000          153600        314572800

              Code size:  2.6375            1351         2765600
      Kernel Structures: 77.6262           39745        81396975
      Server Structures: 54.4032           27855        57045920
             Page Cache:129.5992           66355        135894640
           Proc Buffers:  1.1571             593         1213340
           Proc Headers: 25.0840           12843        26302464

Number of page buffers:    63856
Number of proc buffers:    15964

Buffer Cache, Top 20:

           DB Id         Object Id      Index Id        2K Buffers

               6         927446498             0            9424
               6         507969006             0            7799
               6         959446612             0            7563
               6         116351649             0            7428
               6        2135014687             5            2972
               6         607445358             0            2780
               6         507969006             2            2334
               6        2135014687             0            2047
               6         506589013             0            1766
               6        1022066847             0            1160
               6         116351649           255             987
               6         927446498             8             897
               6         927446498            10             733
               6         959446612             7             722
               6         506589013             1             687
               6         971918604             0             686
               6         116351649             6             387

Procedure Cache, Top 20:

Database Id: 6
Object Id: 1652357121
Object Name: lp_cm_case_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 16
Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages
----
Database Id: 6
Object Id: 1668357178
Object Name: lp_cm_subcase_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 10
Size of plans: 0.202827 Mb, 212680.000000 bytes, 110 pages
----
Database Id: 6
Object Id: 132351706
Object Name: csp_get_case
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 9
Size of plans: 0.149792 Mb, 157068.000000 bytes, 81 pages
----
Database Id: 6
Object Id: 1858261845
Object Name: lp_get_last_caller_new
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 2
Size of plans: 0.054710 Mb, 57368.000000 bytes, 30 pages
...

1> /* redirect output back to the errorlog */
2> dbcc traceoff(3604)
3> go

Dissecting memusage output

The output may appear overwhelming but it's actually pretty easy to parse.
Let's look at each section.

Memory Usage

This section provides a breakdown of the memory configured for the SQL
Server.

Memory Usage:

                            Meg.         2K Blks           Bytes

      Configured Memory:300.0000          153600        314572800

              Code size:  2.6375            1351         2765600
      Kernel Structures: 77.6262           39745        81396975
      Server Structures: 54.4032           27855        57045920
             Page Cache:129.5992           66355        135894640
           Proc Buffers:  1.1571             593         1213340
           Proc Headers: 25.0840           12843        26302464

Number of page buffers:    63856
Number of proc buffers:    15964

     The Configured Memory does not equal the sum of the individual
     components. It does in the sybooks example but in practice it
     doesn't always. This is not critical and it is simply being noted
     here.

The Kernel Structures and Server structures are of mild interest. They can
be used to cross-check that the pre-allocation is what you believe it to be.
The salient line items are Number of page buffers and Number of proc
buffers.

The Number of proc buffers translates directly to the number of 2K pages
available for the procedure cache.

The Number of page buffers is the number of 2K pages available for the
buffer cache.

As a side note and not trying to muddle things, these last two pieces of
information can also be obtained from the errorlog:

     ... Number of buffers in buffer cache: 63856.
     ... Number of proc buffers allocated: 15964.

In our example, we have 15,964 2K pages (~32MB) for the procedure cache and
63,856 2K pages (~126MB) for the buffer cache.

Buffer Cache

The buffer cache contains the data pages that the SQL Server will be either
flushing to disk or transmitting to a user connection.

If this area is too small, the SQL Server must flush 2K pages sooner than
might be necessary to satisfy a user connection's request.

For example, in most database applications there are small edit tables that
are used frequently by the application. These tables will populate the
buffer cache and normally will remain resident during the entire life of the
SQL Server. This is good because a user connection may request validation
and the SQL Server will find the data page(s) resident in memory. If however
there is insufficient memory configured, then these small tables will be
flushed out of the buffer cache in order to satisfy another query. The next
time a validation is requested, the tables will have to be re-read from disk
in order to satisfy the request. Your performance will degrade.

Memory access is easily an order of magnitude faster than performing a
physical I/O.

In this example we know from the previous section that we have 63,856 2K
pages (or buffers) available in the buffer cache. The question to answer is,
"do we have sufficient buffer cache configured?"

The following is the output of the dbcc memusage regarding the buffer cache:

Buffer Cache, Top 20:

           DB Id         Object Id      Index Id        2K Buffers

               6         927446498             0            9424
               6         507969006             0            7799
               6         959446612             0            7563
               6         116351649             0            7428
               6        2135014687             5            2972
               6         607445358             0            2780
               6         507969006             2            2334
               6        2135014687             0            2047
               6         506589013             0            1766
               6        1022066847             0            1160
               6         116351649           255             987
               6         927446498             8             897
               6         927446498            10             733
               6         959446612             7             722
               6         506589013             1             687
               6         971918604             0             686
               6         116351649             6             387

                                Index Legend

                        Value       Definition

                        0      Table data

                        1      Clustered index

                        2-250  Nonclustered indexes

                        255    Text pages

   * To translate the DB Id use select db_name(#) to map back to the
     database name.
   * To translate the Object Id, use the respective database and use the
     select object_name(#) command.
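
For example, to translate the first row of the table above (the ids are
from this illustration and will differ on your server; your_database
below stands for whatever db_name(6) returns):

1> select db_name(6)
2> go
1> use your_database
2> go
1> select object_name(927446498)
2> go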

It's obvious that the first 10 items take up the largest portion of the
buffer cache. Sum these values and compare the result to the amount of
buffer cache configured.

Summing the 10 items nets a result of 45,263 2K data pages. Comparing that
to the number of pages configured, 63,856, we see that this SQL Server has
sufficient memory configured.

When do I need more Buffer Cache?

I follow the following rules of thumb to determine when I need more buffer
cache:

   * If the sum of all the entries reported is equal to the number of pages
     configured and all entries are roughly the same size, crank it up.
   * Note the natural groupings that occur in the example. If the difference
     between any of the groups is greater than an order of magnitude I'd be
     suspicious, but only if the sum of the larger groups is very close to
     the number of pages configured.

Procedure Cache

If the procedure cache is not of sufficient size you may get sporadic 701
errors:

     There is insufficient system memory to run this query.

In order to calculate the correct procedure cache one needs to apply the
following formula (found in SQL Server Troubleshooting Guide - Chapter 2,
Procedure Cache Sizing):

     proc cache size = max(# of concurrent users) * (size of the
     largest plan) * 1.25

          The flaw with the above formula is that if 10% of the
          users are executing the largest plan, then you'll
          overshoot. If you have distinct classes of connections
          whose largest plans are mutually exclusive then you need
          to account for that:

               ttl proc cache = proc cache size * x% + proc
               cache size * y% ...

The max(# of concurrent users) is not the number of user connections
configured but rather the actual number of connections during the peak
period.

To compute the size of the largest [query] plan take the results from the
dbcc memusage's, Procedure Cache section and apply the following formula:

     query plan size = [size of plans in bytes] / [number of plans]

We can compute the size of the query plan for lp_cm_case_list by using the
output of the dbcc memusage:

...
Database Id: 6
Object Id: 1652357121
Object Name: lp_cm_case_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 16
Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages
----
...

Entering the respective numbers, the query plan size for lp_cm_case_list is
21K:

     query plan size = 339072 / 16
     query plan size = 21192 bytes or 21K

The formula would be applied to all objects found in the procedure cache and
the largest value would be plugged into the procedure cache size formula:

                              Query Plan Sizes

                                               Query
                                Object          Plan
                                                Size

                        lp_cm_case_list        21K

                        lp_cm_subcase_list     21K

                        csp_get_case           19K

                        lp_get_last_caller_new 28K

The size of the largest [query] plan is 28K.

Entering these values into the formula:

     proc cache size = max(# of concurrent users) * (size of the
     largest plan) * 1.25
     proc cache size = 491 connections * 28K * 1.25
     proc cache size = 17,185 2K pages required

Our example SQL Server has 15,964 2K pages configured but 17,185 2K pages
are required. This SQL Server can benefit by having more procedure cache
configured.

This can be done in one of two ways:

  1. If you have some headroom in your buffer cache, then use sp_configure
     "procedure cache" to increase the ratio of procedure cache to buffer
     cache or

          procedure cache =
               [ proposed procedure cache ] /
               ( [ current procedure cache ] + [ current buffer cache ]
               )

          The new procedure cache would be 22%:

               procedure cache = 17,185 / ( 15,964 + 63,856 )
               procedure cache = .2152 or 22%

  2. If the buffer cache cannot be shrunk, then use sp_configure "memory" to
     increase the total memory:

          mem size =
               ([ proposed procedure cache ]) /
               ([ current procedure cache ] / [ current configured
               memory ])

          The new memory size would be 165,399 2K pages, assuming that
          the procedure cache is unchanged:

               mem size = 17,185 / ( 15,964 / 153,600 )
               mem size = 165,399 2K pages
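
A hedged sketch of applying either remedy (System 10 sp_configure
syntax; both parameters are static, so a reconfigure and a server
restart are needed before the change takes effect):

1> /* option 1: shift the available-cache split to 22% procedure cache */
2> sp_configure "procedure cache", 22
3> go
1> reconfigure
2> go

1> /* option 2: grow total memory to 165,399 2K pages instead */
2> sp_configure "memory", 165399
3> go
1> reconfigure
2> go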

----------------------------------------------------------------------------

                  Q3.8: WHY SHOULD I USE _STORED PROCEDURES_?
                                       
   
     _________________________________________________________________
   
   There are many advantages to using stored procedures (unfortunately
   they do not handle the _text/image_ types):
     * Security - you can revoke access to the base tables and only allow
       users to access and manipulate the data via the stored procedures
       (see the sketch after this list).
     * Performance - stored procedures are parsed and a query plan is
       compiled. This information is stored in the system tables and it
       only has to be done once.
     * Network - if you have users who are on a WAN (slow connection),
       having stored procedures will improve throughput because fewer
       bytes need to flow down the wire from the client to the SQL Server.
     * Tuning - if you have all your SQL code housed in the database,
       then it's easy to tune the stored procedure without affecting the
       clients (unless of course the parameters change).
     * Modularity - during application development, the application
       designer can concentrate on the front-end and the DB designer can
       concentrate on the SQL Server.
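   
   A minimal sketch of the Security point above (the table is the pubs
   _titles_ sample table; the procedure name is illustrative only):

/* users can no longer touch the base table directly...            */
revoke all on titles from public
go

/* ...but they can still reach the data through the procedure      */
create procedure get_titles_by_type
(
@type    char(12)
)
as
  select title_id, title, price
    from titles
   where type = @type
return
go

grant execute on get_titles_by_type to public
go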
       
   
     _________________________________________________________________

                        Q3.9: YOU AND _SHOWPLAN_ OUTPUT
                                       
   
     _________________________________________________________________
   
   
   
   Microsoft SQL Server includes a very intelligent cost-based query
   optimizer which, given an ad-hoc query, can quickly determine the best
   access method for retrieving the data, including the order in which to
   join tables and whether or not to use indexes that may be on those
   tables. By using a cost-based query optimizer, the System
   Administrator or end user is released from having to determine the
   most efficient way of structuring the query to get optimal performance
   -- instead, the optimizer looks at all possible join orders, and the
   cost of using each index, and picks the plan with the least cost in
   terms of page I/O's.
   
   Detailed information on the final access method that the optimizer
   chooses can be displayed for the user by executing the Transact-SQL
   "SET SHOWPLAN ON" command. This command will show each step that the
   optimizer uses in joining tables and which, if any, indexes it chooses
   to be the least-cost method of accessing the data. This can be
   extremely beneficial when analyzing certain queries to determine if
   the indexes that have been defined on a table are actually being
   considered by the optimizer as useful in getting to the data. This
   document will define and explain each of the output messages from
   SHOWPLAN, and give example queries and the output from SHOWPLAN to
   illustrate the point. The format will be consistent throughout: a
   heading which corresponds to the exact text of a SHOWPLAN statement,
   followed by a description of what it means, a sample query which
   generates that particular message, and the full output from executing
   the query with the SHOWPLAN option on. Wherever possible, the queries
   will use the existing tables and indexes, unaltered, from the SQL
   Server "Pubs" sample database.
   
  STEP n
  
   This statement will be included in the SHOWPLAN output for every
   query, where n is an integer, beginning with "STEP 1". For some
   queries, SQL Server cannot effectively retrieve the results in a
   single step, and must break the query plan into several steps. For
   example, if a query includes a GROUP BY clause, the query will need to
   be broken into at least two steps: one step to select the qualifying
   rows from the table, and another step to group them. The following
   query demonstrates a single-step query.

        Query:          SELECT au_lname, au_fname
                        FROM Authors
                        WHERE city = "Oakland"

        SHOWPLAN:       STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        authors
                        Nested iteration
                        Table Scan

  The type of query is SELECT (into a worktable)
  
   This SHOWPLAN statement indicates that SQL Server needs to insert some
   of the query results into an intermediate worktable, and later in the
   query processing will then select the values out of that table. This
   is most often seen with a query which involves a GROUP BY clause, as
   the results are first put into a work table, and then the qualifying
   rows in the work table are grouped based on the given column in the
   GROUP BY clause. The following query returns a list of all cities and
   indicates the number of authors that live in each city. The query plan
   is composed of two steps: the first step selects the rows into a
   worktable, and the second step retrieves the grouped rows from the
   worktable:

        Query:          SELECT city, total_authors = count(*)
                        FROM Authors
                        GROUP BY city

        SHOWPLAN:       STEP 1
                        The type of query is SELECT (into a
                                                     worktable)
                        GROUP BY
                        Vector Aggregate
                        FROM TABLE
                        authors
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable

                        STEP 2
                        The type of query is SELECT
                        FROM TABLE
                        Worktable
                        Nested iteration
                        Table Scan

  The type of query is <query type>
  
   
   
   This statement describes the type of query for each step. For most
   user queries, the value for <query type> will be SELECT, INSERT,
   UPDATE, or DELETE. If SHOWPLAN is turned on while other commands are
   issued, the <query type> will reflect the command that was issued. The
   following examples show various outputs for different
   queries/commands:
   

        Query 1:        CREATE TABLE Mytab (col1 int)
        SHOWPLAN 1:     STEP 1
                        The type of query is TABCREATE

        Query 2:        INSERT Publishers
                        VALUES ("9904", "NewPubs", "Seattle", "WA")

        SHOWPLAN 2:     STEP 1
                        The type of query is INSERT
                        The update mode is direct
                        Table Scan
                        TO TABLE
                        publishers

   
   
  The update mode is deferred
  
   There are two methods or "modes" that SQL Server can use to perform
   update operations such as INSERT, DELETE, UPDATE, and SELECT INTO.
   These methods are called deferred update and direct update. When the
   deferred method is used, the changes are applied to all rows of the
   table by making log records in the transaction log to reflect the old
   and new value of the column(s) being modified (in the case of UPDATE
   operations), or the values which will be inserted or deleted (in the
   case of INSERT and DELETE, respectively). When all of the log records
   have been constructed, the changes are then applied to the data pages.
   This method generates more log records than a direct update (discussed
   later), but it has the advantage of allowing the execution of commands
   which may cascade changes throughout a table. For example, consider a
   table which has a column "col1" with a unique index on it, and data
   values numbered consecutively from 1 to 100 in that column. Assume an
   UPDATE statement is executed to increase the value in each row by 1:

        Query 1:        UPDATE Mytable
                        SET col1 = col1 + 1

        SHOWPLAN 1:     STEP 1
                        The type of query is UPDATE
                        The update mode is deferred
                        FROM TABLE
                        Mytable
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Mytable

   Consider the consequences of starting at the first row in the table,
   and updating each row, through the end of the table. Updating the
   first row (which has an initial value of 1) to 2 would cause an error,
   as the unique index would be violated since there is already a value
   of 2 in the table; likewise, updating the second row (which has an
   initial value of 2) to 3 would also cause a unique key violation, as
   would all rows through the end of the table, except for the last row.
   By using deferred updates, this problem is easily avoided. The log
   records are first constructed to show what the new values for each row
   will be, the existing rows are deleted, and the new values inserted.
   
   Just as with UPDATE commands, INSERT commands may also be deferred for
   very similar reasons. Consider the following query (there is no
   clustered index or unique index on the "roysched" table):

        Query 2:        INSERT roysched SELECT * FROM roysched

        SHOWPLAN 2:     STEP 1
                        The type of query is INSERT
                        The update mode is deferred
                        FROM TABLE
                        roysched
                        Nested iteration
                        Table Scan
                        TO TABLE
                        roysched

   Since there is no clustered index on the table, the new rows will be
   added to the end of the table. The query processor needs to be able to
   differentiate between the existing rows that are currently in the
   table (prior to the INSERT command) and the rows which will be
   inserted, so as to not get into a continuous loop of selecting a row,
   inserting it at the end of the table, selecting that row that it just
   inserted, and re-inserting it again. By using the deferred method of
   inserting, the log records can first be constructed to show all of
   the currently-existing values in the table, then SQL Server will
   re-read those log records to insert them into the table.
   
  The update mode is direct
  
   Whenever possible, SQL Server will attempt to use the direct method of
   applying updates to tables, since it is faster and requires fewer log
   records to be generated than the deferred method. Depending on the
   type of command, one or more criteria must be met in order for SQL
   Server to perform the update using the direct method. Those criteria
   are:
     * INSERT: For the direct update method to be used for INSERT
       operations, the table into which the rows are being inserted
       cannot be a table which is being read from in the same command.
       The second query example in the previous section demonstrates
        this, where the rows are being inserted into the same table from
        which they are being selected. In addition, if rows are being
       inserted into the target table, and one or more of the target
       table's columns appear in the WHERE clause of the query then the
       deferred method, rather than the direct method, will be used.
     * SELECT INTO: When a table is being populated with data by means of
       a SELECT INTO command, the direct method will always be used to
       insert the new rows.
     * DELETE: For the direct update method to be used for DELETE
       operations, the query optimizer must be able to determine that
       either 0 or 1 rows qualify for the delete. The only means for it
       to verify this is to check that there is a unique index on the
       table, which is qualified in the WHERE clause of the DELETE
       command, and the target table is not joined with any other
       table(s).
     * UPDATE: For the direct update method to be used for UPDATE
       operations, the same criteria apply as for DELETE: a unique index
       must exist such that the query optimizer can determine that no
       more than 1 row qualifies for the update, and the only table in
       the UPDATE command is the target table to update. In addition, all
       columns that are being updated must be datatypes that are
        fixed-length, rather than variable-length. Note that any column
       that allows NULLs is internally stored by SQL Server as a
       variable-length datatype column.

        Query 1:        DELETE
                        FROM authors
                        WHERE au_id = "172-32-1176"

        SHOWPLAN 1:     STEP 1
                        The type of query is DELETE
                        The update mode is direct
                        FROM TABLE
                        authors
                        Nested iteration
                        Using Clustered Index
                        TO TABLE
                        authors

        Query 2:        UPDATE titles
                        SET type = "popular_comp"
                        WHERE title_id = "BU2075"

        SHOWPLAN 2:     STEP 1
                        The type of query is UPDATE
                        The update mode is direct
                        FROM TABLE
                        titles
                        Nested iteration
                        Using Clustered Index
                        TO TABLE
                        titles

        Query 3:        UPDATE titles
                        SET price = $5.99
                        WHERE title_id = "BU2075"

        SHOWPLAN 3:     STEP 1
                        The type of query is UPDATE
                        The update mode is deferred
                        FROM TABLE
                        titles
                        Nested iteration
                        Using Clustered Index
                        TO TABLE
                        titles

   Note that the only difference between the second and third example
   queries is the column of the table which is being updated. In the
   second query, the direct update method is used, whereas in the third
   query, the deferred method is used. This difference is due to the
   datatype of the column being updated: the titles.type column is
   defined as "char(12) NOT NULL", while the titles.price column is
   defined as "money NULL". Since the titles.price column is not a
   fixed-length datatype, the direct method cannot be used.
   
  GROUP BY
  
   This statement appears in the SHOWPLAN output for any query that
   contains a GROUP BY clause. Queries that contain a GROUP BY clause
   will always be at least two-step queries: one step to select the
   qualifying rows into a worktable and group them, and another step to
   return the rows from the worktable. The following example illustrates
   this:

        Query:          SELECT type, AVG(advance),
                               SUM(ytd_sales)
                        FROM titles
                        GROUP BY type

        SHOWPLAN:       STEP 1
                        The type of query is SELECT (into a
                                                     worktable)
                        GROUP BY
                        Vector Aggregate
                        FROM TABLE
                        titles
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable

                        STEP 2
                        The type of query is SELECT
                        FROM TABLE
                        Worktable
                        Nested iteration
                        Table Scan

  Scalar Aggregate
  
   Transact-SQL includes the aggregate functions:
     * AVG()
     * COUNT()
     * COUNT(*)
     * MAX()
     * MIN()
     * SUM()
       
   Whenever an aggregate function is used in a SELECT statement that does
   not include a GROUP BY clause, it produces a single value, regardless
   of whether it is operating on all of the rows in a table or on a
   subset of the rows defined by a WHERE clause. When an aggregate
   function produces a single value, the function is called a "scalar
   aggregate", and is listed as such by SHOWPLAN. The following example
   shows the use of scalar aggregate functions:

        Query:          SELECT AVG(advance), SUM(ytd_sales)
                        FROM titles
                        WHERE type = "business"

        SHOWPLAN:       STEP 1
                        The type of query is SELECT
                        Scalar Aggregate
                        FROM TABLE
                        titles
                        Nested iteration
                        Table Scan

                        STEP 2
                        The type of query is SELECT
                        Table Scan

   Notice that SHOWPLAN considers this a two-step query, which is very
   similar to the SHOWPLAN from the GROUP BY query listed earlier. Since
   the query contains a scalar aggregate, which will return a single
   value, SQL Server keeps internally a "variable" to store the result of
   the aggregate function. It can be thought of as a temporary storage
   space to keep a running total of the aggregate function as the
   qualifying rows from the table are evaluated. After all rows have been
   evaluated from the table (Step 1), the final value from the "variable"
   is then selected (Step 2) to return the scalar aggregate result.
   
  Vector Aggregate
  
   When a GROUP BY clause is used in a query which also includes an
   aggregate function, the aggregate function produces a value for each
   group. These values are called "vector aggregates". The "Vector
   Aggregate" statement from SHOWPLAN indicates that the query includes a
   vector aggregate. Below is an example query and SHOWPLAN which
   includes a vector aggregate:

        Query:          SELECT title_id, AVG(qty)
                        FROM sales
                        GROUP BY title_id

        SHOWPLAN:       STEP 1
                        The type of query is SELECT (into a
                                                     worktable)
                        GROUP BY
                        Vector Aggregate
                        FROM TABLE
                        sales
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable

                        STEP 2
                        The type of query is SELECT
                        FROM TABLE
                        Worktable
                        Nested iteration
                        Table Scan

  FROM TABLE
  
   This SHOWPLAN step indicates the table that the query is reading from.
   In most queries, the "FROM TABLE" will be followed on the next line by
   the name of the table which is being selected from. In other cases, it
   may indicate that it is selecting from a worktable (discussed later).
   The main importance of examining the table names after the "FROM
   TABLE" output is to determine the order in which the query optimizer
   is joining the tables. The order of the tables listed after the "FROM
   TABLE" statements in the SHOWPLAN output indicates the order in which
   the tables were joined; this order may be (and often is) different
   from the order in which they are listed in the FROM clause of the
   query, or the order in which they appear in the WHERE clause of the
   query. This is because the query optimizer examines all the different
   join orders for the tables involved and picks the join order that
   requires the fewest I/Os.

        Query:          SELECT authors.au_id, au_fname, au_lname
                        FROM authors, titleauthor, titles
                        WHERE authors.au_id = titleauthor.au_id
                        AND titleauthor.title_id = titles.title_id
                        AND titles.type = "psychology"

        SHOWPLAN:       STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        titles
                        Nested iteration
                        Table Scan
                        FROM TABLE
                        titleauthor
                        Nested iteration
                        Table Scan
                        FROM TABLE
                        authors
                        Nested iteration
                        Table Scan

   This query illustrates the order in which the SQL Server query
   optimizer chooses to join the tables, which is not the order that they
   were listed in the FROM clause or the WHERE clause. By examining the
   order of the "FROM TABLE" statements, it can be seen that the
   qualifying rows from the titles table are first located (using the
   search clause <titles.type = "psychology">). Those rows are then
   joined with the titleauthor table (using the join clause
   <titleauthor.title_id = titles.title_id>), and finally the titleauthor
   table is joined with the authors table to retrieve the desired columns
   (using the join clause <authors.au_id = titleauthor.au_id>).
   
  TO TABLE
  
   When a command is issued which makes or attempts to make a
   modification to one or more rows of a table, such as INSERT, DELETE,
   UPDATE, or SELECT INTO, the "TO TABLE" statement will show the target
   table which is being modified. For some operations which require an
   intermediate step which inserts rows into a worktable (discussed
   later), the "TO TABLE" will indicate that the results are going to the
   "Worktable" table, rather than a user table. The following examples
   illustrate the use of the "TO TABLE" statement:

        Query 1:        INSERT sales
                        VALUES ("8042", "QA973", "7/15/92", 7,
                                "Net 30", "PC1035")

        SHOWPLAN 1:     STEP 1
                        The type of query is INSERT
                        The update mode is direct
                        Table Scan
                        TO TABLE
                        sales

        Query 2:        UPDATE publishers
                        SET city = "Los Angeles"
                        WHERE pub_id = "1389"

        SHOWPLAN 2:     STEP 1
                        The type of query is UPDATE
                        The update mode is deferred
                        FROM TABLE
                        publishers
                        Nested iteration
                        Using Clustered Index
                        TO TABLE
                        publishers

   Notice that the SHOWPLAN for the second query indicates that the
   publishers table is used both as the "FROM TABLE" as well as the "TO
   TABLE". In the case of UPDATE operations, the optimizer needs to read
   the table which contains the row(s) to be updated, resulting in the
   "FROM TABLE" statement, and then needs to modify the row(s), resulting
   in the "TO TABLE" statement.
   
  Worktable
  
   For some types of queries, such as those that require the results to
   be ordered or displayed in groups, the SQL Server query optimizer may
   determine that it is necessary to create its own temporary worktable.
   The worktable is used to hold the intermediate results of the query,
   at which time the result rows can be ordered or grouped, and then the
   final results selected from that worktable. When all results have been
   returned, the worktable is automatically dropped. The worktables are
   always created in the Tempdb database, so it is possible that the
   system administrator may have to increase the size of Tempdb to
   accommodate queries which require very large worktables. Since the
   query optimizer creates these worktables for its own internal use, the
   names of the worktables will not be listed in the tempdb..sysobjects
   table.
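
   If tempdb does turn out to be too small for the worktables a query
   needs, the usual fix is simply to enlarge tempdb. A minimal sketch of
   checking its current size and adding space follows; the device name
   tempdb_dev is only an example, so substitute a device that actually
   exists in sysdevices on your server:

sp_helpdb tempdb
go
/* add 50 MB of space to tempdb on the named device */
alter database tempdb on tempdb_dev = 50
go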
   
   Worktables will always need to be used when a query contains a GROUP
   BY clause. For queries involving ORDER BY, it is possible that the
   ordering can be done without the use of the worktable. If there is a
   clustered index on the column(s) in the ORDER BY clause, the optimizer
   knows that the rows are already stored in sorted order, so a sort in a
   worktable is not necessary (although there are exceptions to this,
   depending on the sort order which is installed on the server). For a
   nonclustered index, the data pages are not stored in sorted order, but
   the index leaf level is, so the worktable is likewise unnecessary if
   the cheapest access plan is to use that nonclustered index. However,
   if the optimizer determines that scanning the entire table will
   require fewer I/Os than using the nonclustered index, then a worktable
   will need to be created to order the results. The following examples
   illustrate the use of worktables:

        Query 1:        SELECT type, AVG(advance), SUM(ytd_sales)
                        FROM titles
                        GROUP BY type

        SHOWPLAN 1:     STEP 1
                        The type of query is SELECT (into a
                                                     worktable)
                        GROUP BY
                        Vector Aggregate
                        FROM TABLE
                        titles
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable

                        STEP 2
                        The type of query is SELECT
                        FROM TABLE
                        Worktable
                        Nested iteration
                        Table Scan

        Query 2:        SELECT *
                        FROM authors
                        ORDER BY au_lname, au_fname

        SHOWPLAN 2:     STEP 1
                        The type of query is INSERT
                        The update mode is direct
                        Worktable created for ORDER BY
                        FROM TABLE
                        authors
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable

                        STEP 2
                        The type of query is SELECT
                        This step involves sorting
                        FROM TABLE
                        Worktable
                        Using GETSORTED
                        Table Scan

        Query 3:        SELECT *
                        FROM authors
                        ORDER BY au_id

        SHOWPLAN 3:     STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        authors
                        Nested iteration
                        Table Scan

   In the third example above, notice that no worktable was created for
   the ORDER BY clause. This is because there is a unique clustered index
   on the authors.au_id column, so the data is already stored in sorted
   order based on the au_id value, and an additional sort for the ORDER
   BY is not necessary. In the second example, there is a composite
   nonclustered index on the columns au_lname and au_fname. However,
   since the optimizer chose not to use the index, and due to the sort
   order on the SQL Server, a worktable needed to be created to
   accommodate the sort.
   
  Worktable created for SELECT_INTO
  
   SQL Server's SELECT INTO operation performs two functions: it first
   creates a table with the exact same structure as the table being
   selected from, and then it inserts all rows which meet the WHERE
   conditions (if a WHERE clause is used) of the table being selected
   from. The "Worktable created for SELECT_INTO" statement is slightly
   misleading, in that the "worktable" that it refers to is actually the
   new physical table that is created. Unlike other worktables, it is not
   dropped when the query finishes executing. In addition, the worktable
   is not created in Tempdb, unless the user specifies Tempdb as the
   target database for the new table.

        Query:          SELECT *
                        INTO seattle_stores
                        FROM stores
                        WHERE city = "seattle"

        SHOWPLAN:       STEP 1
                        The type of query is TABCREATE

                        STEP 2
                        The type of query is INSERT
                        The update mode is direct
                        Worktable created for SELECT_INTO
                        FROM TABLE
                        stores
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable

  Worktable created for DISTINCT
  
   When a query is issued which includes the DISTINCT keyword, all
   duplicate rows are excluded from the results so that only unique rows
   are returned. To accomplish this, SQL Server first creates a worktable
   to store all of the results of the query, including duplicates, just
   as though the DISTINCT keyword was not included. It then sorts the
   rows in the worktable, and is able to easily discard the duplicate
   rows. Finally, the rows from the worktable are returned, which ensures
   that no duplicate rows will appear in the output.

        Query:          SELECT DISTINCT city
                        FROM authors

        SHOWPLAN:       STEP 1
                        The type of query is INSERT
                        The update mode is direct
                        Worktable created for DISTINCT
                        FROM TABLE
                        authors
                        FROM TABLE
                        authors
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable

                        STEP 2
                        The type of query is SELECT
                        This step involves sorting
                        FROM TABLE
                        Worktable
                        Using GETSORTED
                        Table Scan

  Worktable created for ORDER BY
  
   As discussed previously, queries which include an ORDER BY clause will
   often require the use of a temporary worktable. When the optimizer
   cannot use an available index for the ordering, it creates a worktable
   for use in sorting the result rows prior to returning them. Below is
   an example which shows the worktable being created for the ORDER BY
   clause:

        Query:          SELECT *
                        FROM authors
                        ORDER BY city

        SHOWPLAN:       STEP 1
                        The type of query is INSERT
                        The update mode is direct
                        Worktable created for ORDER BY
                        FROM TABLE
                        authors
                        FROM TABLE
                        authors
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable

                        STEP 2
                        The type of query is SELECT
                        This step involves sorting
                        FROM TABLE
                        Worktable
                        Using GETSORTED
                        Table Scan

  Worktable created for REFORMATTING
  
   When joining tables, SQL Server may in some cases choose to use a
   "reformatting strategy" to join the tables and return the qualifying
   rows. This strategy is only considered as a last resort, when the
   tables are large and neither table in the join has a useful index to
   use. The reformatting strategy inserts the rows from the smaller of
   the two tables into a worktable. Then, a clustered index is created on
   the worktable, and the clustered index is then used in the join to
   retrieve the qualifying rows from each table. The main cost in using
   the reformatting strategy is the time and I/Os necessary to build the
   clustered index on the worktable; however, that cost is still cheaper
   than joining the tables with no index. If user queries are using the
   reformatting strategy, it is generally a good idea to examine the
   tables involved and create indexes on the columns of the tables which
   are being joined. The following example illustrates the reformatting
   strategy. Since none of the tables in the Pubs database are large
   enough for the optimizer to consider using this strategy, two new
   tables are used. Each table has 5 columns defined as "char(200)". Tab1
   has 500 rows and Tab2 has 250 rows.

        Query:          SELECT Tab1.col1
                        FROM Tab1, Tab2
                        WHERE Tab1.col1 = Tab2.col1

        SHOWPLAN:       STEP 1
                        The type of query is INSERT
                        The update mode is direct
                        Worktable created for REFORMATTING
                        FROM TABLE
                        Tab2
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable

                        STEP 2
                        The type of query is SELECT
                        FROM TABLE
                        Tab1
                        Nested iteration
                        Table Scan
                        FROM TABLE
                        Worktable
                        Nested iteration
                        Using Clustered Index
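
   If the reformatting strategy shows up against regular user tables, the
   usual cure is to give the optimizer a useful index on the join column
   so that the worktable and its clustered index do not have to be built
   on every execution. A minimal sketch, using the example tables above
   (the index name is arbitrary):

create index Tab2_col1_idx on Tab2 (col1)
go
update statistics Tab2
go
sp_recompile Tab2
go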

  This step involves sorting
  
   This SHOWPLAN statement indicates that the query must sort the
   intermediate results before returning them to the user. Queries that
   specify DISTINCT will require an intermediate sort, as well as queries
   that have an ORDER BY clause which cannot use an available index. As
   stated earlier, the results are put into a worktable, and the
   worktable is then sorted. The following example demonstrates a query
   which requires a sort:

        Query:          SELECT DISTINCT state
                        FROM stores

        SHOWPLAN:       STEP 1
                        The type of query is INSERT
                        The update mode is direct
                        Worktable created for DISTINCT
                        FROM TABLE
                        stores
                        FROM TABLE
                        stores
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable

                        STEP 2
                        The type of query is SELECT
                        This step involves sorting
                        FROM TABLE
                        Worktable
                        Using GETSORTED
                        Table Scan

  Using GETSORTED
  
   This statement indicates one of the ways in which the result rows can
   be returned from a table. In the case of "Using GETSORTED", the rows
   will be returned in sorted order. However, not all queries which
   return rows in sorted order will have this step. If a query has an
   ORDER BY clause and an index with the proper sort sequence exists on
   the columns being ordered, an intermediate sort may not be necessary,
   and the rows can simply be returned in order using the available
   index. The "Using GETSORTED" method is used when
   SQL Server must first create a temporary worktable to sort the result
   rows, and then return them in the proper sorted order. The following
   example shows a query which requires a worktable to be created and the
   rows returned in sorted order:

        Query:          SELECT au_id, au_lname, au_fname, city
                        FROM authors
                        ORDER BY city

        SHOWPLAN:       STEP 1
                        The type of query is INSERT
                        The update mode is direct
                        Worktable created for ORDER BY
                        FROM TABLE
                        authors
                        FROM TABLE
                        authors
                        Nested iteration
                        Table Scan
                        TO TABLE
                        Worktable
                        STEP 2
                        The type of query is SELECT
                        This step involves sorting
                        FROM TABLE
                        Worktable
                        Using GETSORTED
                        Table Scan

  Nested iteration
  
   The "Nested iteration" is the default technique used to join tables
   and/or return rows from a table. It simply indicates that the
   optimizer is using one or more sets of loops to go through a table and
   retrieve a row, qualify the row based on the search criteria given in
   the WHERE clause, return the row to the front-end, and loop again to
   get the next row. The method in which it gets the rows (such as using
   an available index) is discussed later. The following example shows
   the optimizer doing nested iterations through each of the tables in
   the join:

        Query:          SELECT title_id, title
                        FROM titles, publishers
                        WHERE titles.pub_id = publishers.pub_id
                        AND publishers.pub_id = '1389'

        SHOWPLAN:       STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        publishers
                        Nested iteration
                        Using Clustered Index
                        FROM TABLE
                        titles
                        Nested iteration
                        Table Scan

  EXISTS TABLE : nested iteration
  
   This SHOWPLAN step is very similar to the previous one of "Nested
   iteration". The difference, however, is that this step indicates a
   nested iteration on a table which is part of an existence test in a
   query. There are several ways an existence test can be written in
   Transact-SQL, such as "EXISTS", "IN", or "=ANY". Prior to SQL Server
   version 4.2, queries which contained an IN clause followed by a
   subquery were treated as table joins. Beginning with version 4.2,
   these queries are now treated the same as if they were written with an
   EXISTS clause. The following examples demonstrate the SHOWPLAN output
   with queries which test for existence of values:

        Query 1:        SELECT au_lname, au_fname
                        FROM authors
                        WHERE EXISTS
                                (SELECT *
                                FROM publishers
                                WHERE authors.city = publishers.city)

        SHOWPLAN 1:     STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        authors
                        Nested iteration
                        Table Scan
                        FROM TABLE
                        publishers
                        EXISTS TABLE : nested iteration
                        Table Scan

        Query 2:        SELECT title
                        FROM titles
                        WHERE pub_id IN
                                (SELECT pub_id
                                FROM publishers
                                WHERE city LIKE "B%")

        SHOWPLAN 2:     STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        titles
                        Nested iteration
                        Table Scan
                        FROM TABLE
                        publishers
                        EXISTS TABLE : nested iteration
                        Table Scan

  Table Scan
  
   This SHOWPLAN statement indicates which method was used to retrieve
   the physical result rows from the given table. When the "table scan"
   method is used, the execution begins with the first row in the table;
   each row is then retrieved and compared with the conditions in the
   WHERE clause, and returned to the front-end if it meets the given
   criteria. Regardless of how many rows qualify, every row in the table
   must be looked at, so for very large tables, a table scan can be very
   costly in terms of page I/Os. If a table has one or more indexes on
   it, the query optimizer may still choose to do a table scan instead of
   using one of the available indexes if the optimizer determines that
   the indexes are too costly or are not useful for the given query. The
   following query shows a typical table scan:

        Query:          SELECT au_lname, au_fname
                        FROM authors

        SHOWPLAN:       STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        authors
                        Nested iteration
                        Table Scan

  Using Clustered Index
  
   This SHOWPLAN statement indicates that the query optimizer chose to
   use the clustered index on a table to retrieve the rows. Unlike a
   table scan, using an index to retrieve rows does not require the
   optimizer to examine every row in the table (unless the WHERE clause
   applies to all rows). For queries which return a small percentage of
   the rows from a large table, the savings in terms of I/Os of using an
   index versus doing a table scan can be very significant. The following
   query shows the clustered index being used to retrieve the rows from
   the table:

        Query:          SELECT title_id, title
                        FROM titles
                        WHERE title_id LIKE "PS2%"

        SHOWPLAN:       STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        titles
                        Nested iteration
                        Using Clustered Index

  Index : <index name>
  
   Like the previous statement with the clustered index, this statement
   indicates that the optimizer chose to use an index to retrieve the
   rows instead of doing a table scan. The <index name> that follows the
   "Index :" label will always be the name of a nonclustered index on the
   table. Remember that each table can have no more than one clustered
   index, but can have up to 249 nonclustered indexes. The following
   query illustrates the use of a nonclustered index to find and return
   rows. This query uses the sysobjects table in the master database as
   an example, rather than a table in Pubs, since using a nonclustered
   index on the Pubs tables is generally more costly in terms of I/O than
   a straight table scan, due to the fact that most of the tables are
   only 1 page in size.

        Query:          SELECT *
                        FROM master..sysobjects
                        WHERE name = "mytable"
                        AND uid = 5

        SHOWPLAN:       STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        master..sysobjects
                        Nested iteration
                        Index : ncsysobjects

  Using Dynamic Index
  
   This SHOWPLAN statement indicates that the query optimizer has chosen
   to build its own index during the execution of the query, for use in
   its "OR strategy". Since queries involving OR clauses are generally
   not very efficient in terms of being able to quickly access the data,
   the SQL Server optimizer may choose to use the OR strategy. When the
   OR strategy is used, the optimizer makes several passes through the
   table -- one pass for each argument to each OR clause. The results of
   each pass are added to a single worktable, and the worktable is then
   sorted to remove any duplicate rows. The worktable does not contain
   the actual data rows from the table, but rather it contains the row
   IDs for the matching rows. The row IDs are simply a combination of the
   page number and row number on that page for each of the rows. When the
   duplicates have been eliminated, the optimizer considers the worktable
   of row IDs to be, essentially, its own index ("Dynamic Index")
   pointing to the table's data rows. It can then simply scan through the
   worktable, get each row ID, and return the data row from the table
   that has that row ID.
   
   The OR strategy is not limited only to queries that contain OR
   clauses. When an IN clause is used to list a group of possible values,
   SQL Server interprets that the same way as though the query had a
   separate equality clause for each of the values in the IN clause. To
   illustrate the OR strategy and the use of the Dynamic Index, the
   queries will be based on a table with 10,000 unique data rows, a
   unique nonclustered index on column "col1", and a unique nonclustered
   index on column "col2".

        Query 1:        SELECT *
                        FROM Mytable
                        WHERE col1 = 355
                        OR col2 = 732

        SHOWPLAN 1:     STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        Mytable
                        Nested iteration
                        Index : col1_idx
                        FROM TABLE
                        Mytable
                        Nested iteration
                        Index : col2_idx
                        FROM TABLE
                        Mytable
                        Nested iteration
                        Using Dynamic Index

        Query 2:        SELECT *
                        FROM Mytable
                        WHERE col1 IN (700, 1503, 311)

        SHOWPLAN 2:     STEP 1
                        The type of query is SELECT
                        FROM TABLE
                        Mytable
                        Nested iteration
                        Index : col1_idx
                        FROM TABLE
                        Mytable
                        Nested iteration
                        Index : col1_idx
                        FROM TABLE
                        Mytable
                        Nested iteration
                        Index : col1_idx
                        FROM TABLE
                        Mytable
                        Nested iteration
                        Using Dynamic Index

   SQL Server does not always resort to using the OR strategy for every
   query that contains OR clauses. The following conditions must be met
   before it will choose to use the OR strategy:
     * All columns in the OR clause must belong to the same table.
     * If any portion of the OR clause requires a table scan (due to lack
       of index or poor selectivity of a given index), then a table scan
       will be used for the entire query, rather than the OR strategy.
     * The decision to use the OR strategy is made after all indexes and
       costs are evaluated. If any other access plan is less costly (in
       terms of page I/Os), SQL Server will choose to use the plan with
       the least cost. In the examples above, if a straight table scan
       would result in less page I/Os than using the OR strategy, then
       the queries would be processed as a table scan instead of using
       the Dynamic Index.
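
   A quick way to see which plan the optimizer actually settled on,
   without paying the cost of running the query, is to combine showplan
   with noexec. This is only a sketch, using the example table above;
   remember to turn noexec back off before issuing any further commands:

set showplan on
go
set noexec on
go
select * from Mytable where col1 = 355 or col2 = 732
go
set noexec off
go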
       
   
     _________________________________________________________________

                          Q3.10: POOR MAN'S SP_SYSMON
                                       
   
     _________________________________________________________________
   
   Fine tune the _waitfor_ for your application. You may need _TS Role_
   -- see Q1.22.
   

use master
go
dbcc traceon(3604)
dbcc monitor ("clear", "all", "on")
waitfor delay "00:01:00"
dbcc monitor ("sample", "all", "on")
dbcc monitor ("select", "all", "on")
dbcc traceon(8399)
select field_name, group_name, value from sysmonitors
dbcc traceoff(8399)
go
dbcc traceoff(3604)
go

   
     _________________________________________________________________

                   Q3.11: VIEW MRU-LRU PROCEDURE CACHE CHAIN
                                       
   
     _________________________________________________________________
   
   _dbcc procbuf_ gives a listing of the current contents of the
   procedure cache. By repeating the process at intervals it is possible
   to watch procedures moving down the MRU-LRU chain, and so to see how
   long procedures remain in cache. The neat thing about this approach is
   that you can size your cache according to what is actually happening,
   rather than relying on estimates based on assumptions that may not
   hold on your site.
   
   To run it:
   

   dbcc traceon(3604)
   go
   dbcc procbuf
   go

   If you use sqsh it's a bit easier to grok the output:
   

   dbcc traceon(3604);
   dbcc procbuf;|fgrep pbname

   
     _________________________________________________________________
----------------------------------------------------------------------

Subject: Sybase FAQ: 5/8 - section 4

Archive-name: databases/sybase-faq/part5
URL: http://reality.sgi.com/pablo/Sybase_FAQ



                              Q4.1: SP_FREEDEVICE
                                       
   
     _________________________________________________________________
   

use master
go

drop proc sp_freedevice
go

create proc sp_freedevice
@devname        char(30) = null

as begin

  declare @showdev bit
  declare @alloc int
  if @devname is null
        select @devname = "%"
        , @showdev = 0
  else
        select @showdev = 1

  select @alloc = low
  from  master.dbo.spt_values
  where type = "E"
  and   number = 1

  create table #freedev
        (name   char(30),
         size   float,
         used   float)

  insert #freedev
        select dev.name,
                ((dev.high - dev.low) * @alloc + 500000) / 1048576,
                sum((usg.size * @alloc + 500000) / 1048576)
        from master.dbo.sysdevices dev, master.dbo.sysusages usg
        where dev.low <= usg.size + usg.vstart - 1
        and   dev.high >= usg.size + usg.vstart - 1
        and   dev.cntrltype = 0
        group by dev.name

  insert #freedev
        select name,
                ((high - low) * @alloc + 500000) / 1048576,
                0
        from    master.dbo.sysdevices
        where   cntrltype = 0
        and not exists (select * from #freedev
                        where name = master.dbo.sysdevices.name)

  if @showdev = 1 begin
    select
        devname = dev.name,
        size = convert(varchar(10),f.size) + " MB",
        used = convert(varchar(10),f.used) + " MB",
        free = convert(varchar(10),f.size - f.used) + " MB"
    from master.dbo.sysdevices dev, #freedev f
    where dev.name = f.name
    and   dev.name like @devname

    select
        dbase = db.name,
        size  = convert(varchar(10),((usg.size * @alloc)
        + 500000) / 1048576) + " MB",
        usage = vl.name
    from master.dbo.sysdatabases db,
        master.dbo.sysusages usg,
        master.dbo.sysdevices dev,
        master.dbo.spt_values vl
    where db.dbid = usg.dbid
    and   usg.segmap = vl.number
    and   dev.low <= usg.size + usg.vstart - 1
    and   dev.high >= usg.size + usg.vstart - 1
    and   dev.status & 2 = 2
    and   vl.type = "S"
    and   dev.name = @devname
  end
  else
    begin
        select total = convert(varchar(10), sum(size)) + " MB",
                used = convert(varchar(10), sum(used)) + " MB",
                free = convert(varchar(10), sum(size) - sum(used)) + " MB"
        from #freedev

        select devname = dev.name,
                size   = convert(varchar(10),f.size) + " MB",
                used   = convert(varchar(10),f.used) + " MB",
                free   = convert(varchar(10),f.size - f.used) + " MB"
        from    master.dbo.sysdevices dev, #freedev f
        where   dev.name = f.name
    end
  end
go

grant execute on sp_freedevice to public
go
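
   Once installed, run the procedure with no argument for a summary of
   all devices, or pass a device name for detail on that device and the
   databases placed on it. For example ("master" is typically the logical
   name of the master device in sysdevices):

sp_freedevice
go
sp_freedevice "master"
go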

   
     _________________________________________________________________

                                Q4.2: SP_WHODO
                                       
   
     _________________________________________________________________
   
Sybase System 10.x


use master
go

drop procedure sp_whodo
go

create procedure sp_whodo @loginame varchar(30) = NULL as

declare @low int
declare @high int
declare @spidlow int
declare @spidhigh int

select @low = 0, @high = 32767, @spidlow = 0, @spidhigh = 32767

if @loginame is not NULL
begin
        select @low = suser_id(@loginame), @high = suser_id(@loginame)
        if @low is NULL
        begin
                if @loginame like "[0-9]%"
                begin
                        select @spidlow = convert(int, @loginame),
                             @spidhigh = convert(int, @loginame),
                             @low = 0, @high = 32767
                end
                else
                begin
                        print "No login exists with the supplied name."
                        return (1)
                end
        end
end
select
   spid,
   status,
   substring(suser_name(suid),1,12)     loginame,
   hostname,
   convert(char(3),blocked)             blk,
   convert(char(7),isnull(time_blocked, 0)) blk_sec,
   convert(char(16),program_name)       program,
   convert(char(7),db_name(dbid))       dbname,
   convert(char(16),cmd)                cmd,
   convert(char(6),cpu)                 cpu,
   convert(char(7),physical_io)         io,
   convert(char(16),isnull(tran_name, ""))           tran_name
from master..sysprocesses
where suid >= @low and suid <= @high
and spid >= @spidlow and spid <= @spidhigh

return (0)
go

grant execute on sp_whodo to public
go

Sybase 4.x


use master
go

drop procedure sp_whodo
go

create procedure sp_whodo @loginame varchar(30) = NULL as

declare @low int
declare @high int
declare @spidlow int
declare @spidhigh int

select @low = 0, @high = 32767, @spidlow = 0, @spidhigh = 32767

if @loginame is not NULL
begin
        select @low = suser_id(@loginame), @high = suser_id(@loginame)
        if @low is NULL
        begin
                if @loginame like "[0-9]%"
                begin
                        select @spidlow = convert(int, @loginame),
                             @spidhigh = convert(int, @loginame),
                             @low = 0, @high = 32767
                end
                else
                begin
                        print "No login exists with the supplied name."
                        return (1)
                end
        end
end
select
   spid,
   status,
   substring(suser_name(suid),1,12)     loginame,
   hostname,
   convert(char(3),blocked)             blk,
   convert(char(16),program_name)       program,
   convert(char(7),db_name(dbid))       dbname,
   convert(char(16),cmd)                cmd,
   convert(char(6),cpu)                 cpu,
   convert(char(7),physical_io)         io
from master..sysprocesses
where suid >= @low and suid <= @high
and spid >= @spidlow and spid <= @spidhigh

return (0)
go

grant execute on sp_whodo to public
go
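
   The procedure is called like sp_who: with no argument it lists every
   process, a login name restricts the output to that login, and a string
   of digits is treated as a spid (see the "[0-9]%" test above). For
   example:

sp_whodo
go
sp_whodo "sa"
go
sp_whodo "12"
go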

   
     _________________________________________________________________

                 Q4.3: GENERATING DUMP/LOAD DATABASE COMMAND.
                                       
   
     _________________________________________________________________
   

#!/bin/sh

#
# This script calls the function gen_dumpload_command to generate
# either a dump or a load command.
#
# This function works for both System 10 and Sybase 4.x
# installations.  You simply need to change your method of thinking.
# In Sybase 4.x, we only had a single stripe.  In System 10, most
# of the time we define a single stripe but in our bigger databases
# we define more stripes.
#
# Therefore, everything is a stripe.  Whether we use one stripe or
# many... cool?  Right on!
#
#
# The function gen_dumpload_command assumes that all dump devices
# adhere to the following naming convention:
#
#     stripe_NN_database
#
# NOTE:  If your shop is different search for "stripe" and replace
#        with your shop's value.
#
#


# gen_dumpload_command():
#
#        purpose:  to generate a dump/load to/from command based on
#                  what is defined in sysdevices.  The environment
#                  variable D_DEV is set.
#
#        return:   zero on success, non-zero on failure.
#
#        sets var: D_DEV is set with the actual dump/load command;
#                  stripe devices are also handled.
#
#        calls:    *none*
#
#        parms:    1 = DSQUERY
#                  2 = PASSWD
#                  3 = DB
#                  4 = CMD -> "dump" or "load"
#


gen_dumpload_command()
{
   LOCAL_DSQUERY=$1
   LOCAL_PASSWD=$2
   DB_TO_AFFECT=$3
   CMD=$4 # dump/load

   if [ "$CMD" = "dump" ] ; then
      VIA="to"
   else
      VIA="from"
   fi

   # Check for a dump device

   echo "Checking for standard $CMD device"
   D_DEV=`echo "$LOCAL_PASSWD
select name from sysdevices where name like \"stripe%_$DB_TO_AFFECT\"
go" | $SYBIN/isql -U sa -S $LOCAL_DSQUERY -w1000 | sed -n -e '/stripe/p' | \
      nawk '{ if (NR == 1) print "'$CMD' database '$DB_TO_AFFECT' '$VIA'", $0
                         else print "stripe on", $0
                         }'`

   if [ -z "$D_DEV" ] ; then # nothing defined... :(
      return 1
   fi

   return 0
}

SYBIN=$SYBASE/bin

gen_dumpload_command MAG_LOAD_2 thissux wcid "dump"

if [ $? -eq 1 ] ; then
   echo "Error..."
fi

# so what does this generate?  :-)
echo $D_DEV

# ... and it can be used as follows:

echo "$PASSWD
$D_DEV
go" | isql ....

exit 0
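
   The script assumes that the dump devices already exist and follow the
   stripe_NN_database naming convention described in the comments above.
   A minimal sketch of creating such devices with sp_addumpdevice -- the
   paths and the wcid database name are only examples:

sp_addumpdevice "disk", stripe_01_wcid, "/dumps/wcid/stripe_01_wcid.dmp"
go
sp_addumpdevice "disk", stripe_02_wcid, "/dumps/wcid/stripe_02_wcid.dmp"
go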

   
     _________________________________________________________________

                               Q4.4: SYBPERL FAQ
                                       
   _This is Michael Peppler's mpeppler@itf.ch FAQ._
     _________________________________________________________________
   
  Index of Sections
     * Sybperl? What is that?
          + What is Sybperl?
          + Where can I get Sybperl?
          + Can I get Sybperl for Windows/NT?
          + Can I get Sybperl for OS foo?
          + Is there a version of Sybperl for Perl 5?
     * Building and Installing
          + Where is uperl.o?
          + make test doesn't work
          + How to get Dynamic Linking under HP-UX?
          + How do I set PERL_SRC?
          + I've moved the Sybase libraries, and Sybperl won't run...
          + Sybperl won't run as a CGI script
     * Sybperl 1.x vs Sybperl 2.x
          + Are Sybperl 1.x scripts compatible with 2.x?
          + Is Sybperl 2.x stable enough to use, or should I use 1.x for
            production code?
     * Documentation
          + Is there any good sybperl documentation?
          + Is there a sybperl FAQ?
          + Is there a sybperl mailing list?
     * Improving sybperl
          + Reporting bugs
          + Feature requests
            
   
     _________________________________________________________________
   
   
   
  Sybperl? What is that?
  
   
   
    What is Sybperl?
    
   Matthew Healy wrote this in a recent comp.databases.sybase post:
   
     Perl is an interpreted programming language discussed in
     _comp.lang.perl.*_ newsgroups; though it's interpreted, the
     interpreter has an internal compiler built-in, so medium-size
     programs still run quite efficiently.
     
     It has become very popular among people who do database and/or
     system administration type work for several reasons:
     * Very powerful and flexible string-manipulation facilities;
       anything you can do with awk, grep, sed, etc. can be done in Perl
       -- and it even comes with utilities that convert scripts in those
       languages into Perl!
     * No arbitrary limits on the size of any object; you can slurp up an
       entire file into a string variable if you have enough _RAM._
     * An incredibly useful feature called associative arrays, which
       often gives you a quick and easy way of doing things that would
       otherwise require going back to your data structures textbooks.
     * Versions are available for nearly every platform you've ever heard
       of.
     * It's _free_!
       
     Sybperl is a package of extensions to Perl that basically add the
     Sybase db_library API calls to the Perl language; the combination is
     an extremely powerful scripting tool for Sybase DBA's and
     programmers because it adds the existing strengths of Perl to the
     Sybase API.
     
     So, when would you use it? Anytime the combination of shell scripts
     with isql is too limited but writing a C program is overkill.
     
     In particular, since Perl has become the language of choice for many
     WWW gurus, Sybperl has become the tool of choice for integrating
     Sybase databases with the Web. And there are now some toolkits
     written in Sybperl that make it even simpler; my favorite among
     those is WDB.
     
     Here are some URLs to check out:
     http://www.sybase.com/WWW/sybase_www_faq.html
     http://www.sybase.com/WWW/
     http://www.sybase.com/WWW/sybase_www_tools.html
     http://www.sybase.com/WWW/Sybperl/index.html
     http://arch-http.hq.eso.org/bfrasmus/wdb/wdb.html
     http://arch-http.hq.eso.org/bfrasmus/wdb/distribution/install.html
     ftp://ftp.demon.co.uk/pub/perl/db/perl4/sybperl/
     ftp://ftp.demon.co.uk/pub/perl/db/mod/Sybase/
     ftp://ftp.cis.ufl.edu/pub/perl/ ftp://ftp.perl.com/
     
   
   
    Where can I get Sybperl?
    
   Sybperl is available from CPAN (the Comprehensive Perl Archive
   Network)
   
   The CPAN master is at
   
     ftp://ftp.funet.fi/pub/languages/perl/CPAN
     
   and Sybperl is archived in
   
      ftp://ftp.funet.fi/pub/languages/perl/CPAN/authors/id/MEWP/sybperl-2.0.tar.gz
     
   CPAN is mirrored widely, please select the CPAN site nearest you to
   keep the networks happy. At the moment the registered CPAN sites are:
     * Africa
       
     ftp://ftp.is.co.za/programming/perl/CPAN/
     * Australasia
       
     ftp://coombs.anu.edu.au/pub/perl/
     ftp://ftp.mame.mu.oz.au/pub/perl/CPAN/
     * Europe
       
     ftp://ftp.funet.fi/pub/languages/perl/CPAN/
     ftp://ftp.sunet.se/pub/lang/perl/CPAN/
     ftp://ftp.cs.ruu.nl/pub/PERL/CPAN/
     ftp://ftp.demon.co.uk/pub/mirrors/perl/CPAN/
     ftp://ftp.pasteur.fr/pub/computing/unix/perl/CPAN/
      ftp://ftp.rz.ruhr-uni-bochum.de/pub/programming/languages/perl/CPAN/
      ftp://ftp.switch.ch/mirror/CPAN/
     ftp://orpheu.ci.uminho.pt/pub/lang/perl/
     * North America
       
     ftp://ftp.cis.ufl.edu/pub/perl/CPAN/
     ftp://uiarchive.cso.uiuc.edu/pub/lang/perl/CPAN/
     ftp://ftp.delphi.com/pub/mirrors/packages/perl/CPAN/
     ftp://ftp.sedl.org/pub/mirrors/CPAN/
     ftp://ftp.sterling.com/programming/languages/perl/
     
   _Version 2.x_ is also available on Sybase's Web page:
   
     http://www.sybase.com/WWW/Sybperl/index.html
     
   I try to make sure that the Sybase Web page is up to date, but I don't
   control it... 
   
    Can I get Sybperl for Windows/NT?
    
   Perl is available for Windows/NT from
   
     ftp://ntperl.hip.com
     
   However, certain key components that are used to add modules to Perl
   are missing in the current version, which makes things difficult for
   us Sybperl users.
   
   However, I _know_ that it is possible to get Sybperl to work under
   Windows/NT (I would venture to guess that most of the work is in
   creating appropriate Makefiles).
   
   Contact me if you are interested in attempting this. 
   
    Can I get Sybperl for OS foo?
    
   Perl is primarily a Unix tool, and Sybperl was developed under SunOS.
   However, Perl has been ported to numerous other OSes (MS-DOS,
   Windows/NT, OS/2, VMS), and Sybperl should theoretically be portable to
   these OSes as well, in particular with Perl 5's better extension
   mechanism.
   
   I am always ready to provide any help I can to anyone wishing to port
   Sybperl to any particular OS/platform. 
   
    Is there a version of Sybperl for Perl 5?
    
   Yes. _Sybperl 2.x_ works only with _Perl 5_.
   
   _Sybperl 1.x_ does _not_ work with _Perl 5_, as the Perl extension
   mechanism was changed with _Perl 5_.
     _________________________________________________________________
   
   
   
  Building and Installing
  
   
   
    Where is uperl.o (1.x)?
    
   _uperl.o_ is the object file that _sybperl 1.x_ needs to be linked
   with to give it access to all the Perl routines. _uperl.o_ is normally
   created when you build _perl 4.036_, but if you have run make clean
   since building, you can recreate it by running _make uperl.o_.
   
   If you have _Perl 5.x_ you need _sybperl 2.x_ 
   
    make test doesn't work
    
   
    1. The 'interfaces' file is not visible.
       
        The Sybase libraries need access to a file that contains
       information on how to connect to the server. This file is normally
       located at the root of the Sybase installation directory, and the
        Sybase libraries and programs normally use the SYBASE environment
       variable to find this directory (ie sybperl will try to find
       $SYBASE/interfaces in order to run).
    2. The userid/password combination is invalid.
       
       Edit the PWD file to add a valid userid/password combination for
       accessing your database server.
     3. The $SYBASE environment variable is incorrectly set.
        
        The Sybase::CTlib module needs access to its _locales_ information
        in the $SYBASE/locales directory.
       
   
   
    How to get Dynamic Linking under HP-UX?
    
   
   
   The bundled C compiler that comes with HP-UX apparently can't produce
   position independent code, which is needed to build a dynamically
   loadable library under HP-UX. The solution there is to use the add-on
   ANSI-C compiler or GCC.
   
   In addition, you can't build a dynamically loadable module of
   DBlibrary v. 4.x, because it is a statically linked library, and was
   not compiled using the position independent code flag.
   
   So the end result is: to get a dynamically loadable version you need
   the Sybase System 10 OpenClient libraries, and a C compiler that is
   capable of producing position independent code. 
   
    How do I set PERL_SRC?
    
   
   
   This problem sometimes appears when building sybperl with a copy of
   Perl that has not been installed (ie from the source tree):
   
   You've run:
   
     %perl Makefile.PL
     %make
     
   and the output looked something like this:
   
     % make
     Rebuilding CTlib/Makefile ...
     /home/mpeppler/PERL/perl5.001/miniperl
     -I//home/mpeppler/PERL/perl5.001/lib \
     -e "use ExtUtils::MakeMaker; MM->runsubdirpl(qw(CTlib))" \
     INST_LIB=/home/mpeppler/PERL/perl5.001/lib
     INST_ARCHLIB=/home/mpeppler/PERL/perl5.001/lib \
     INST_EXE=./blib LINKTYPE=dynamic LIBPERL_A=libperl.a
     Unable to locate Perl source. Try setting PERL_SRC in Makefile.PL or
     on command line.
     make: *** [CTlib/Makefile] Error 2 %
     
   
   
   To do this, you need to add a parameter to the WriteMakefile() call in
   each of the Makefile.PLs (toplevel, DBlib/Makefile.PL,
   CTlib/Makefile.PL and Sybperl/Makefile.PL). The parameter should look
   like this:
   
     WriteMakefile(DISTNAME => "sybperl",
     .... other parameters,
     PERL_SRC => "/home/mpeppler/src/perl5.001",
     ...);
     
   obviously replacing "/home/mpeppler/src/perl5.001" with the
   appropriate directory on your system. 
   
    I've moved the Sybase libraries, and Sybperl won't run...
    
    The sybperl make process hard-codes the path to the Sybase shared
   libraries (libsybdb.so and friends) into the binaries (either the
   dynamically loadable modules, or the Perl executable). This is done so
   that Perl can find the libraries it needs even if the LD_LIBRARY_PATH
   environment variable which is normally used to specify special library
   directories is not set (as when running a script from cron, for
   example).
   
   This technique obviously fails when the paths to the Sybase libraries
   are changed (through an upgrade, or when moving the binaries to
   another machine with a different configuration).
   
   The solution is to:
     * Set LD_LIBRARY_PATH to include the Sybase library directories _or_
     * Link the Sybase libraries to a standard directory (such as
       /usr/lib or /usr/local/lib) which will be searched by default.
       
   
   
    Sybperl won't run as a CGI script
    
   
   
   Typical symptoms: your script runs from the command line, but fails
   when run as a CGI script.
   
   The problem is (probably) that you normally have LD_LIBRARY_PATH set
   when running Sybperl scripts, but the HTTP daemon does not normally
   set this environment variable when executing child processes. It is
   not possible to set the LD_LIBRARY_PATH environment variable in the
   script itself (the variable needs to be set before the execution
   starts), so you may need to write a small shell wrapper that sets this
   environment variable and then execs your script.
   
   Alternatively, link the Sybase shared libraries to one of the _default_
   directories (such as /usr/lib).
     _________________________________________________________________
   
   
   
  Sybperl 1.x vs Sybperl 2.x
  
   
   
   
   
    Are sybperl 1.x scripts compatible with 2.x?
    
   
   
   Yes. With these exceptions:
   
     @var means different things to Perl and to Transact-SQL. If you use
     @var as SQL code (typically: "declare @var int exec my_proc @var
     out") you need to escape the @ (as in \@var).
     
   
   
   If you were in the habit of making calls to the _sybperl 1.x_ subs
   without parens (ie &dbnextrow; instead of &dbnextrow()) then there are
   certain situations where the dbnextrow implementation in sybperl 2.x
   may try to use an invalid DBPROCESS pointer. This problem does not
   exist if your scripts always pass explicit DBPROCESS parameters.
   
   Here at ITF I've linked /usr/local/bin/perl to /usr/local/bin/sybperl
   and all my old sybperl scripts work, provided that they had a "require
   'sybperl.pl';" at the top. 
   
    Is sybperl 2.x stable enough to use, or should I use 1.x for production
    code?
    
   
   
   _Sybperl 2.x_ is composed of three modules: Sybase::DBlib,
   Sybase::CTlib and Sybase::Sybperl. The DBlib and Sybperl modules are
   stable, and I recommend their use in production code (as I've been
   doing here for over a year now).
   
   The Sybase::CTlib module is in _beta_, and the API may be slightly
   modified in the future. In addition to that, I've noticed a
   performance problem with the CTlib module which appears to make it
   quite a bit slower than the DBlib module for equivalent code. This is
   something that I still need to investigate.
   
   The advantage (if one can call it that) of staying with _sybperl 1.x_
   is that the code is frozen - both for _sybperl_ itself and for _perl_.
   This means that any bugs that you code around will not come back to
   bite you as you upgrade from one release to the other.
     _________________________________________________________________
   
   
   
  Documentation
  
   
   
    Is there any good sybperl documentation?
    
   
   
   There is a Sybperl man page (in the current release it comes out to 13
   PostScript pages), but that is definitely _not_ sufficient in itself.
   
   You need _Perl_ documentation (there are over 370 pages of
   documentation in the standard Perl 5 release...). There are several
   good Perl books (in particular 'Programming Perl', Larry Wall & Randal
   Schwartz, O'Reilly and 'Learning Perl', Randal Schwartz, O'Reilly)
   
   And you need _Sybase_ documentation, in particular the Sybase
   OpenClient/C manual (I've got mine online via the Sybooks CD-ROM) and
   the Sybase Transact-SQL manual. 
   
    Is there a sybperl FAQ?
    
   Yes - you're reading it :-)
   
   
   But, more importantly, you should get the Perl FAQ, which is posted
   monthly in comp.lang.perl.announce, and which can be ftp'd from
   
     ftp://ftp.cis.ufl.edu/pub/perl/doc/FAQ
     
   And you need the Sybase FAQ too.
   
     http://reality.sgi.com/employees/pablo_corp/Sybase_FAQ
     
   
   
    Is there a sybperl mailing list?
    
   
   
   Yes. The list's address is sybperl-l@trln.lib.unc.edu
   
     It was moved recently from _sybperl-list@itf.ch_ due to the high
     cost of supporting the list for my company.
     
   
   
   To subscribe, send a message to sybperl-l-REQUEST@trln.lib.unc.edu
   with
   
     add "your name" <your@email.address>
     
   
     _________________________________________________________________
   
   
   
  Improving sybperl
  
   
   
    Reporting bugs
    
   
   
   I have a simple bug tracking database here at ITF. You can submit bugs
   for sybperl either to myself, or directly into the bug database by
   sending an e-mail message to bugtrack@itf.ch with the following
   additional fields:
   
     bug-category: sybperl
     bug-priority: high [or medium, or low]
     bug-type: bug [or feature, or change (for behavior change) or other]
     bug-summary: One line description
     
   The remainder of the message can be used to describe the bug or
   feature request in detail, with examples, etc. 
   
    Feature requests
    
   
   
   If you have any suggestions regarding how _sybperl_ can be improved,
   please send them to me - I am always happy to try to add features :-)
   
     _________________________________________________________________

                               Q4.5: DBSCHEMA.PL
                                       
   
     _________________________________________________________________
   
   In order to use this script you must have Sybperl installed -- see
   Q4.4 for more information.
   

#! /bin/sybperl
#
#       @(#)dbschema.pl 1.11    2/22/94
#
#
# dbschema.pl   A script to extract a database structure from
#               a Sybase database
#
# Written by:   Michael Peppler (mpeppler@itf.ch)
# Last Modified:  22 Feb 1994
#
# Usage:        dbschema.pl -d database -o script.name -t pattern -s server -v
#                   where   database is self-explanatory (default: master)
#                           script.name is the output file (default: script.isql)
#                           pattern is the pattern of object names (in sysobjects)
#                           that we will look at (default: %), and server is
#                           the server to connect to (default, the value of $ENV{DSQUERY}).
#
#                   -v turns on a verbose switch.
#
#    Changes:   11/18/93 - bpapp - Put in interactive SA password prompt
#               11/18/93 - bpapp - Get protection information for views and
#                                  stored procedures.
#               02/22/94 - mpeppler - Merge bpapp's changes with itf version
#
#------------------------------------------------------------------------------

require 'sybperl.pl';
require '/usr/lib/perl/getopts.pl';
require '/usr/lib/perl/ctime.pl';

@nul = ('not null','null');

select(STDOUT); $| = 1;         # make unbuffered

do Getopts('d:t:o:s:v');

$opt_d = 'master' unless $opt_d;
$opt_o = 'script.isql' unless $opt_o;
$opt_t = '%' unless $opt_t;
$opt_s = $ENV{DSQUERY} unless $opt_s;

open(SCRIPT, "> $opt_o") || die "Can't open $opt_o: $!\n";
open(LOG, "> $opt_o.log") || die "Can't open $opt_o.log: $!\n";

#
# Log us in to Sybase as 'sa' and prompt for admin password.
#
print "\nAdministrative account password: ";
system("stty -echo");
chop($sapw = <STDIN>);
system("stty echo");

# Use this if your version of sybperl wants the DSQUERY on the dblogin line
#$dbproc = &dblogin("sa", $sapw, $opt_s);
# Use this if your version of sybperl doesn't want the DSQUERY on the dblogin line
$dbproc = &dblogin("sa", $sapw);

&dbuse($dbproc, $opt_d);

chop($date = &ctime(time));

print "dbschema.pl on Database $opt_d\n";
print LOG "Error log from dbschema.pl on Database $opt_d on $date\n\n";
print LOG "The following objects cannot be reliably created from the script in $opt_o. Please correct the script to remove any inconsistencies.\n\n";
print SCRIPT "/* This Isql script was generated by dbschema.pl on $date.\n** The indexes need to be checked: column names & index names\n** might be truncated! */\n";
print SCRIPT "\nuse $opt_d\ngo\n";      # Change to the appropriate database

#
# first, Add the appropriate user data types:
#
print "Add user-defined data types...";
print SCRIPT "/* Add user-defined data types: */\n\n";

&dbcmd($dbproc, "select s.length, s.name, st.name,\n");
&dbcmd($dbproc, "       object_name(s.tdefault),\n");
&dbcmd($dbproc, "       object_name(s.domain)\n");
&dbcmd($dbproc, "from $opt_d.dbo.systypes s, $opt_d.dbo.systypes st\n");
&dbcmd($dbproc, "where st.type = s.type\n");
&dbcmd($dbproc, "and s.usertype > 100 and st.usertype
     _________________________________________________________________

                               Q4.6: SYBTCL FAQ
                                       
   _This is Tom Poindexter tpoindex@nyx.net FAQ._
     _________________________________________________________________
   
  Index of Sections
     * Overview
     * The enabling language platform
     * Design and commands
     * Applications
     * Information Sources
     * Availability and Support
     * About the Author
       
   
     _________________________________________________________________
   
   
   
  Overview
  
   Sybtcl is an extension to Tcl (Tool Command Language) that allows Tcl
   programs to access Sybase databases. Sybtcl adds additional Tcl
   commands to login to a Sybase server, send SQL statements, retrieve
   result sets, execute stored procedures, etc. Sybtcl simplifies Sybase
   programming by creating a high level interface on top of DB-Library.
   Sybtcl can be used to program a wide variety of applications, from
   system administration procedures to end-user applications.
     _________________________________________________________________
   
   
   
  The enabling language platform
  
   Tool Command Language, often abbreviated "Tcl" and pronounced as
   "tickle", was created by Dr. John Ousterhout at the University of
   California-Berkeley. Tcl is an interpreted script language, similar to
   Unix shell, Awk, Perl, and others. Tcl was designed to be easily
   extended, where new commands are added to the base interpreter to
   provide additional functionality. Core Tcl commands contain all of the
   usual constructs provided by most programming languages: setting and
   accessing variables, file read/write, if-then-else, do-while, function
   calls. Tcl also contains many productivity enhancing commands: list
   manipulation, associative arrays, and regular expression processing.
   
   Tcl has several features that make it a highly productive language.
   First, the language is interpreted. Interpreters allow execution
   without a compile and link step. Code can be developed with immediate
   feedback. Second, Tcl has a single data type: string. While this might
   at first glance seem to be a deficiency, it avoids problems of data
   conversion and memory management. (This feature doesn't preclude Tcl
   from performing arithmetic operations.) Last, Tcl has a consistent and
   simple syntax, much the same as the Unix shell. Every Tcl statement is
   a command name, followed by arguments.
   
   Dr. Ousterhout also developed a companion Tcl extension, called Tk. Tk
   provides simplified programming of X11 applications with a Motif look
   and feel. X11 applications can be programmed with 60%-80% less code
   than equivalent Xt, Motif, or Xview programs using C or C++.
   
   Dr. Ousterhout is continuing Tcl/Tk development at Sun Microsystems.
   Current projects include porting Tcl/Tk to MS-Windows and Macintosh
   environments, creating a GUI window builder, and a byte-code compiler
   for the language.
     _________________________________________________________________
   
   
   
  Design and commands
  
   Sybtcl was designed to fill the gap between pure applications
   development tools (e.g. APT, PowerBuilder, et al.) and database
   administration tools, often Unix shell scripts consisting of 'isql'
   and Awk pipelines. Sybtcl extends the Tcl language with specialized
   commands for Sybase access. Sybtcl consists of a set of C language
   functions that interface DB-Library calls to the Tcl language.
   
   Instead of a simple one-to-one interface to DB-Library, Sybtcl
   provides a high-level Sybase programming interface of its own. The
   following example is a complete Sybtcl program that illustrates the
   simplified interface. It relies on the Tcl interpreter, "tclsh", that
   has been extended with Sybtcl.

  #!/usr/local/bin/tclsh
  set hand [sybconnect "mysybid" "mysybpasswd"]
  sybuse $hand pubs2
  sybsql $hand "select au_lname, au_fname from authors order by au_lname"
  sybnext $hand {
    puts [format "%s, %s" @1 @2]
  }
  sybclose $hand
  exit

   In this example, a Sybase server connection is established
   ("sybconnect"), and the "pubs2" sample database is accessed ("sybuse").
   An SQL statement is sent to the server ("sybsql"), and all rows
   returned are fetched and printed ("sybnext"). Finally, the connection
   is closed ("sybclose").
   
   The same program can be made to display its output in an X11 window,
   with a few changes. The Tcl/Tk windowing shell, "wish", also extended
   with Sybtcl is used.

  #!/usr/local/bin/wish
  listbox .sql_output
  button  .exit -text exit -command exit
  pack .sql_output .exit
  set hand [sybconnect "mysybid" "mysybpasswd"]
  sybuse $hand pubs2
  sybsql $hand "select au_lname, au_fname from authors order by au_lname"
  sybnext $hand {
    .sql_output insert end  [format "%s, %s" @1 @2]
  }
  sybclose $hand

   In addition to these commands, Sybtcl includes commands to access
   return column names and datatypes ("sybcols"), return values from
   stored procedures ("sybretval"), reading and writing of "text" or
   "image" columns ("sybreadtext", "sybwritetext"), canceling pending
   results ("sybcancel"), and polling asynchronous SQL execution
   ("sybpoll").
   
   Full access to Sybase server messages is also provided. Sybtcl
   maintains a Tcl array variable which contains server messages, output
   from stored procedures ("print"), and DB-Library and OS error messages.
     _________________________________________________________________
   
   
   
  Applications
  
   The Sybtcl distribution includes "Wisqlite", an X11 SQL command
   processor. Wisqlite provides a typical windowing style environment to
   enter and edit SQL statements, list results of the SQL execution in a
   scrollable listbox, save or print output. In addition, menu access to
   the Sybase data dictionary is provided, listing tables in a database,
   the column names and datatypes of a table, text of stored procedures
   and triggers.
   
   Other applications included in the Sybtcl distribution include:
     * a simple graphical performance monitor
     * a version of "sp_who", with periodic refresh
     * an enhanced version of Wisqlite, "UCO/Wisql", with point/click SQL
       generation
     * an HTML template generator and CGI processor for WWW-Sybase access
       
   Sybtcl users have reported a wide variety of applications written in
   Sybtcl, ranging from end user applications to database administration
   utilities.
     _________________________________________________________________
   
   
   
  Information Sources
  
   Tcl/Tk is described in detail in "Tcl and the Tk Toolkit" by Dr. John
   Ousterhout, Addison-Wesley Publishing 1994 ISBN: 0-201-63337-X .
   Another recent publication is "Practical Programming in Tcl and Tk" by
   Brent Welch, Prentice Hall 1995 ISBN 0-13-182007-9. A forthcoming book
   on Tcl extensions, including Sybtcl, is due for publication by
   O'Reilly and Associates in early 1996.
   
   A wealth of information on Tcl/Tk is available via Internet sources:
   
     news:comp.lang.tcl
     http://www.sunlabs.com/research/tcl/
     http://www.sco.com/Technology/tcl/Tcl.html
     http://web.cs.ualberta.ca/~wade/HyperTcl/
     ftp://ftp.smli.com/pub/tcl
     ftp://ftp.aud.alcatel.com/tcl/
     
   
     _________________________________________________________________
   
   
   
  Availability and Support
  
   Tcl/Tk and Sybtcl are both released in source code form under a "BSD"
   style license. Tcl/Tk and Sybtcl may be freely used for any purpose,
   as long as copyright credit is given to the respective owners. Tcl/Tk
   can be obtained from either anonymous FTP site listed above. Sybtcl is
   located on ftp.aud.alcatel.com, which serves as the Tcl archive site
   for Internet users. Other sources are the Fall/Winter 1995 release of
   Sun's Developer CD-ROM, and the "Tcl/Tk" CD-ROM title from Walnut
   Creek.
   
   Tcl/Tk and Sybtcl can be easily configured under most modern Unix
   systems including SunOS, Solaris, HP-UX, Irix, OSF/1, AIX, SCO, et al.
   Sybtcl requires Sybase's DB-Library, from Sybase's Open Client bundle.
   
   
   Current versions are:
     * Tcl 7.4: released July 1, 1995
     * Tk 4.0: released July 1, 1995
     * Sybtcl 2.3: released October 9, 1995
       
   The Internet newsgroup comp.lang.tcl is the focal point for support.
   The group is regularly read by developers and users alike. Authors may
   also be reached via email. Sun has committed to keeping Tcl/Tk as
   freely available software.
     _________________________________________________________________
   
   
   
  About the Author
  
   Tom Poindexter is a consultant with expertise in Unix, relational
   databases, systems and application programming. He holds a B.S. degree
   from the University of Missouri, and an M.B.A. degree from Illinois
   State University. He can be reached at tpoindex@nyx.net
     _________________________________________________________________
Q4.7: Extended Stored Procedures

----------------------------------------------------------------------------
The following stored procedures were written by Ed Barlow
ebarlow@pipeline.com and can be ftp'd from the following site:

     ftp://ftp.pnl.gov/pub/incoming/proc.tar.Z [127K]

Here's a pseudo-man page of what you get:

                         Modified Sybase Procedures

             Command                     Description

         sp__help        Better sp_help

         sp__helpdb      Database Information

         sp__helpdevice  Break down database devices into a nice
                         report

         sp__helpgroup   List groups in database by access level

         sp__helpindex   Shows indexes by table

         sp__helpsegment Segment Information

         sp__helpuser    Lists users in current database by group
                         (include aliases)

         sp__lock        Lock information

         sp__who         sp_who that fits on a page

                              Audit Procedures

              Command                     Description

         sp__auditsecurity Security Audit On Server

         sp__auditdb       Audit Current Database For Potential
                           Problems

                       System Administrator Procedures

            Command                     Description

         sp__block      Blocking processes.

         sp__dbspace    Summary of current database space
                        information.

         sp__dumpdevice Listing of Dump devices

         sp__helpdbdev  Show how Databases use Devices

         sp__helplogin  Show logins and remote logins to server

         sp__helpmirror Shows mirror information, discover broken
                        mirrors

         sp__segment    Segment Information

         sp__server     Server summary report (very useful)

         sp__vdevno     Who's who in the device world

                               DBA Procedures

             Command                     Description

         sp__badindex    give information about bad indexes (nulls,
                         bad statistics...)

         sp__collist     list all columns in database

         sp__indexspace  Space used by indexes in database

         sp__noindex     list of tables without indexes.

         sp__helpcolumns show columns for given table

         sp__helpdefault list defaults (part of objectlist)

         sp__helpobject  list objects

         sp__helpproc    list procs (part of objectlist)

         sp__helprule    list rules (part of objectlist)

         sp__helptable   list tables (part of objectlist)

         sp__helptrigger list triggers (part of objectlist)

         sp__helpview    list views (part of objectlist)

         sp__trigger     Useful synopsis report of current database
                         trigger schema

                             Reverse Engineering

             Command                     Description

         sp__revalias     get alias script for current db

         sp__revdb        get db creation script for server

         sp__revdevice    get device creation script

         sp__revgroup     get group script for current db

         sp__revindex     get indexes script for current db

         sp__revlogin     get logins script for server

         sp__revmirror    get mirroring script for server

         sp__revuser      get user script for current db

                              Other Procedures

             Command                    Description

         sp__bcp         Create unix script to bcp in/out database

         sp__date        Who can remember all the date styles?

         sp__quickstats  Quick dump of server summary information
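
Once the procedures have been loaded into the server (the scripts in
proc.tar.Z are run through isql, typically against the database that
holds your system procedures), they are invoked like any other stored
procedure. A minimal sketch, assuming sp__dbspace and sp__helpmirror
have been installed:

  exec sp__dbspace
  go
  exec sp__helpmirror
  go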

----------------------------------------------------------------------------

                Q4.9: SQL TO DETERMINE SPACE USED FOR AN INDEX
                                       
   
     _________________________________________________________________
   
   OK, here's _sp_spaceused_ reduced to bare essentials:
   

  set nocount on
  declare @objname varchar(30)
  select  @objname = "your table"

  select  index_name = i.name,
          i.segment,
          rowtotal = rowcnt(i.doampg),
          reserved = (reserved_pgs(i.id, i.doampg) +
                      reserved_pgs(i.id, i.ioampg)),
          data = data_pgs(i.id, i.doampg),
          index_size =  data_pgs(i.id, i.ioampg),
          unused = ((reserved_pgs(i.id, i.doampg) +
                     reserved_pgs(i.id, i.ioampg)) -
                    (data_pgs(i.id, i.doampg) + data_pgs(i.id,
                                                         i.ioampg)))
  into    #space
  from    sysindexes i
  where   i.id = object_id(@objname)

   You can analyse this in a number of ways:
    1. This query should tally with _sp_spaceused @objname_:
       

  select  'reserved KB' = sum(reserved) * 2,
          'Data KB' = sum(data) * 2,
          'Index KB' = sum(index_size) * 2,
          'Unused KB' = sum(unused) * 2
  from    #space

    2. This one reports space allocation by segment:
       

  select  'segment name' = s.name,
          'reserved KB' = sum(reserved) * 2,
          'Data KB' = sum(data) * 2,
          'Index KB' = sum(index_size) * 2,
          'Unused KB' = sum(unused) * 2
  from    #space t,
          syssegments s
  where   t.segment = s.segment
  group by s.name

    3. This one reports allocations by index:
       

  select  t.index_name,
          s.name,
          'reserved KB' = reserved * 2,
          'Data KB' = data * 2,
          'Index KB' = index_size * 2,
          'Unused KB' = unused * 2
  from    #space t,
          syssegments s
  where   t.segment = s.segment

   If you leave out the where clause in the initial _select into_, you
   can analyse across the whole database.
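   
   For instance, here is a sketch of the database-wide version: the same
   _select into_, minus the where clause, followed by the per-segment
   report (drop #space first if it already exists):
   

  select  index_name = i.name,
          i.segment,
          rowtotal = rowcnt(i.doampg),
          reserved = (reserved_pgs(i.id, i.doampg) +
                      reserved_pgs(i.id, i.ioampg)),
          data = data_pgs(i.id, i.doampg),
          index_size =  data_pgs(i.id, i.ioampg),
          unused = ((reserved_pgs(i.id, i.doampg) +
                     reserved_pgs(i.id, i.ioampg)) -
                    (data_pgs(i.id, i.doampg) + data_pgs(i.id,
                                                         i.ioampg)))
  into    #space
  from    sysindexes i          /* no where clause: every object */

  select  'segment name' = s.name,
          'reserved KB' = sum(reserved) * 2,
          'Data KB' = sum(data) * 2,
          'Index KB' = sum(index_size) * 2,
          'Unused KB' = sum(unused) * 2
  from    #space t,
          syssegments s
  where   t.segment = s.segment
  group by s.name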
   
   Hope this points you in the right direction.
     _________________________________________________________________

                     Q4.10: _DSQL_ - AN _ISQL_ ALTERNATIVE
                                       
   
     _________________________________________________________________
   
   _dsql_ provides command history similar to _tcsh_. It was written by
   David Joyner at NCSU.
   
   _dsql_ takes advantage of GNU's _readline_ module therefore it must
   also be downloaded in order to compile _dsql_:
   
     ftp://straylight.acs.ncsu.edu/pub/sybase/tools/dsql-1.3.tar.Z [28K]
     ftp://straylight.acs.ncsu.edu/pub/sybase/tools/readline-1.1.tar.Z
     [217K]
     
   
     _________________________________________________________________

                               Q4.11: _XSYBMON_
                                       
   
     _________________________________________________________________
   
   _xsybmon_ provides an X interface to Sybase's _sp_monitor_ output. It
   was written by David Joyner at NCSU.
   
   This software may be anonymously ftp'd from:
   
     ftp://straylight.acs.ncsu.edu/pub/sybase/tools/xsybmon-0.7a.tar.Z
     [56K]
     
   
     _________________________________________________________________

                                 Q4.12: SP_DOS
                                       
   
     _________________________________________________________________
   

/*>>>>>>>>>>>>>>>>>>>>>>>>>>> sp_dos <<<<<<<<<<<<<<<<<<<<<<<<<<<<<*/
IF OBJECT_ID('dbo.sp_dos') IS NOT NULL
    DROP PROCEDURE sp_dos
GO

CREATE PROCEDURE sp_dos
    @vcObjectName varchar(30) = NULL
AS
/***********************************************************************
* sp_dos - Display Object Scope
*       This procedure graphically displays the scope of an object in
*       the database.
*
* Copyright 1996, all rights reserved.
*
* Author:  David W. Pledger, Strategic Data Systems, Inc.
*
* Parameters
*       ----------------------------------------------------------------
*       Name            In/Out  Description
*       ----------------------------------------------------------------
*       @vcObjectName    In     Mandatory - The exact name of a single
*                               database object for which the call
*                               hierarchy is to be extracted.
*
* Selected Data
*       A sample report follows:
*       ----------------------------------------------------------------
*
*               SCOPE OF EFFECT FOR OBJECT:  ti_users
*       +------------------------------------------------------------------+
*       (T) ti_users (Trigger on table 'users')
*       |
*       +--(P) pUT_GetError
*       |  |
*       |  +--(U) ui_error
*       |
*       +--(U) BGRP
*       |
*       +--(U) user_information (See Triggers: tu_user_information)
*       |
*       +--(U) users (See Triggers: ti_users, tu_users, td_users)
*       |
*       +--(P) pUT_LUDVersion
*          |
*          +--(P) pUT_GetError
*          |  |
*          |  +--(U) ui_error
*          |
*          +--(U) BGRP_LUDVersion
*
*       <End of Sample>
*
* Return Values
*       ----------------------------------------------------------------
*       Value           Description
*       ----------------------------------------------------------------
*       < -99           Unexpected error - should never occur.
*
*       -99 to -1       Sybase **reserved** return status values.
*
*       0               Execution succeeded
*
*       1               Execution of this procedure failed.
*
*       > 1             Unexpected error - should never occur.
*
***********************************************************************/
BEGIN

    /*------------------- Local Declarations -------------------------*/
    DECLARE @iObjectID    int           /* System ID of object        */
    DECLARE @cObjectType  char(1)       /* System Object Type code    */
    DECLARE @vcName       varchar(30)   /* System Object name         */
    DECLARE @vcMsg        varchar(255)  /* Error Message if needed    */
    DECLARE @iInsTrigID   int           /* Insert Trigger ID          */
    DECLARE @iUpdTrigID   int           /* Update Trigger ID          */
    DECLARE @iDelTrigID   int           /* Delete Trigger ID          */
    DECLARE @vcErrMsg     varchar(255)  /* Error Message              */

    /* Local variables to facilitate descending the parent-child
    ** object hierarchy.
    */
    DECLARE @iCurrent     int           /* Current node in the tree   */
    DECLARE @iRoot        int           /* The root node in the tree  */
    DECLARE @iLevel       int           /* The current level          */

    /* Local variables that contain the fragments of the text to
    ** be displayed while descending the hierarchy.
    */
    DECLARE @iDotIndex    int           /* Index for locating periods */
    DECLARE @cConnector   char(3)       /* '+--'                      */
    DECLARE @cSibSpacer   char(3)       /* '|  '                      */
    DECLARE @cBar         char(1)       /* '|'                        */
    DECLARE @cSpacer      char(3)       /* '   '                      */
    DECLARE @cPrntStrng1  char(255)     /* The first string to print  */
    DECLARE @cPrntStrng2  char(255)     /* The second string to print */
    DECLARE @iLoop        int           /* Temp var used for loop     */
    DECLARE @vcDepends    varchar(255)  /* Dependency String          */
    DECLARE @iDependsItem int           /* Index to a string item     */

    /* Create a temporary table to handle the hierarchical
    ** decomposition of the task parent-child relationship.  The Stack
    ** table keeps track of where we are while the leaf table keeps
    ** track of the leaf tasks which need to be performed.
    */
    CREATE TABLE #Stack
        (iItem int,
         iLevel int)

    /*------------------- Validate Input Parameters --------------------*/
    /* Make sure the table is local to the current database. */
    IF (@vcObjectName LIKE "%.%.%") AND (SUBSTRING(@vcObjectName, 1,
            CHARINDEX(".", @vcObjectName) - 1) != DB_NAME())
        GOTO ErrorNotLocal

    /* Now check to see that the object is in sysobjects. */
    IF OBJECT_ID(@vcObjectName) IS NULL
        GOTO ErrorNotFound

    /* ---------------------- Initialization -------------------------*/

    /* Don't print any row counts while this is in progress. */
    SET NOCOUNT ON

    /* Retrieve the object ID out of sysobjects */
    SELECT @iObjectID = O.id,
           @cObjectType = O.type
    FROM   sysobjects O
    WHERE  O.name = @vcObjectName

    /* Make sure the object exists. */
    IF NOT (@@rowcount = 1 and @@error = 0 and @iObjectID > 0)
        GOTO ErrorNotFound

    /* Initialize the print string pieces. */
    SELECT @cConnector  = "+--",
           @cSibSpacer  = "|..",
           @cBar        = "|",
           @cSpacer     = "...",
           @cPrntStrng1 = "",
           @cPrntStrng2 = ""

    /* Print a separator line. */
    PRINT " "
    PRINT "** Utility by David Pledger, Strategic Data Systems, Inc.  **"
    PRINT "**         PO Box 498, Springboro, OH  45066               **"
    PRINT " "
    PRINT "         SCOPE OF EFFECT FOR OBJECT:  %1!",@vcObjectName
    PRINT "+------------------------------------------------------------------+"

    /* -------------------- Show the Hierarchy -----------------------*/
    /* The root of the hierarchy is simply the object that was passed in. */
    SELECT @iRoot = @iObjectID

    /* Since there is a root task, we can assign the first
    ** stack value and assign it a level of one.
    */
    SELECT @iCurrent = @iRoot,
           @iLevel = 1

    /* Prime the stack with the root level. */
    INSERT INTO #Stack values (@iCurrent, 1)

    /* As long as there are nodes which have not been visited
    ** within the tree, the level will be > 0.  Continue until all
    ** nodes are visited.  This outer loop descends the tree through
    ** the parent-child relationship of the nodes.
    */
    WHILE (@iLevel > 0)
    BEGIN

        /* Do any nodes exist at the current level?  If yes, process them.
        ** If no, then back out to the previous level.
        */
        IF EXISTS
            (SELECT *
             FROM   #Stack S
             WHERE  S.iLevel = @iLevel)
        BEGIN

            /* Get the smallest numbered node at the current level. */
            SELECT @iCurrent = min(S.iItem)
            FROM   #Stack S
            WHERE  S.iLevel = @iLevel

            /* Get the name and type of this node.  */
            SELECT @cObjectType = O.type,
                   @vcName = O.name,
                   @iInsTrigID = ISNULL(O.instrig, 0),
                   @iUpdTrigID = ISNULL(O.updtrig, 0),
                   @iDelTrigID = ISNULL(O.deltrig, 0)
            FROM   sysobjects O
            WHERE  O.id = @iCurrent

           /*
            *    *=================================================*    *
            *    * Print out data for this node.  (Consider        *    *
            *    * making this a separate procedure.)              *    *
            *    *=================================================*    *
            */

            /* Initialize the print strings to empty (different from NULL).
            ** @cPrntStrng1 is used to 'double space' the output and
            ** contains the necessary column connectors, but no data.
            ** @cPrntStrng2 contains the actual data at the end of the
            ** string.
            */
            SELECT @cPrntStrng1 = ""
            SELECT @cPrntStrng2 = ""

            /* Level 1 is the root node level: the object that was
            ** passed in.  All other objects in the tree are
            ** subordinate to it.
            */
            IF @iLevel = 1
            BEGIN
                /* Print data for the root node. */
                SELECT @cPrntStrng1 = "",
                       @cPrntStrng2 = "(" + @cObjectType + ") " + @vcName
            END
            ELSE /* Else part of (IF @iLevel = 1) */
            BEGIN

                /* Initialize loop variable to 2 since level one has
                ** already been processed for printing.
                */
                SELECT @iLoop = 2

                /* Look at the values on the stack at each level to
                ** determine which symbol should be inserted into the
                ** print string.
                */
                WHILE @iLoop <= @iLevel
                BEGIN

                   /* While the loop variable is less than the current
                   ** level, add the appropriate spacer to line up
                   ** the printed output.
                   */
                   IF @iLoop < @iLevel
                   BEGIN

                       /* Is there a sibling (another node which exists
                       ** at the same level) on the stack?  If so, use
                       ** one type of separator; otherwise, use another
                       ** type of separator.
                       */
                       IF EXISTS(SELECT * FROM #Stack WHERE iLevel = @iLoop)
                       BEGIN
                           SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) +
                                   @cSibSpacer
                           SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) +
                                   @cSibSpacer
                       END
                       ELSE
                       BEGIN
                           SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) + @cSpacer
                           SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) + @cSpacer
                       END
                   END
                   ELSE /* Else part of (IF @iLoop < @iLevel) */
                   BEGIN
                       SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) + @cBar
                       SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) +
                               @cConnector + "(" + @cObjectType + ") " +
                               @vcName
                   END

                   /* Increment the loop variable */
                   SELECT @iLoop = @iLoop + 1

                END /* While @iLoop <= @iLevel */
            END /* IF @iLevel = 1 */

            /* Spaces are inserted into the string to separate the levels
            ** into columns in the printed output.  Spaces, however, caused
            ** a number of problems when attempting to concatenate the
            ** two strings together.  To perform the concatenation, the
            ** function rtrim was used to remove the end of the string.
            ** This also removed the spaces we just added.  To alleviate
            ** this problem, we used a period (.) wherever there was
            ** supposed to be a space.  Now that we are ready to print
            ** the line of text, we need to substitute real spaces
            ** wherever there is a period in the string.  To do this,
            ** we simply look for periods and substitute spaces.  This
            ** has to be done in a loop since there is no mechanism to
            ** make this substitution in the whole string at once.
            */

            /* Find the first period. */
            SELECT @iDotIndex = charindex (".", @cPrntStrng1)

            /* If a period exists, substitute a space for it and then
            ** find the next period.
            */
            WHILE @iDotIndex > 0
            BEGIN
                /* Substitute the space */
                SELECT @cPrntStrng1 = stuff(@cPrntStrng1, @iDotIndex, 1, " ")

                /* Find the next. */
                SELECT @iDotIndex = charindex (".", @cPrntStrng1)
            END

            /* Do the same thing for the second print string. */
            SELECT @iDotIndex = charindex (".", @cPrntStrng2)
            WHILE @iDotIndex > 0
            BEGIN
                SELECT @cPrntStrng2 = stuff(@cPrntStrng2, @iDotIndex, 1, " ")
                SELECT @iDotIndex = charindex (".", @cPrntStrng2)
            END

            SELECT @vcDepends = NULL

            IF @iInsTrigID > 0
                SELECT @vcDepends = OBJECT_NAME(@iInsTrigID) + " (Insert)"

            IF @iUpdTrigID > 0
                IF @vcDepends IS NULL
                    SELECT @vcDepends = OBJECT_NAME(@iUpdTrigID) + " (Update)"
                ELSE
                    SELECT @vcDepends = @vcDepends + ", " +
                           OBJECT_NAME(@iUpdTrigID) + " (Update)"

            IF @iDelTrigID > 0
                IF @vcDepends IS NULL
                    SELECT @vcDepends = OBJECT_NAME(@iDelTrigID) + " (Delete)"
                ELSE
                    SELECT @vcDepends = @vcDepends + ", " +
                           OBJECT_NAME(@iDelTrigID) + " (Delete)"

             IF @vcDepends IS NOT NULL
                 IF @cObjectType = "T"
                     SELECT @cPrntStrng2 = @cPrntStrng2 +
                               " (Trigger on table '" + @vcDepends + "')"
                 ELSE
                     SELECT @cPrntStrng2 = @cPrntStrng2 +
                                       " (See Triggers: " + @vcDepends + ")"

            /* Remove trailing blanks from the first print string.  */
            SELECT @cPrntStrng1 = rtrim(@cPrntStrng1)
            SELECT @cPrntStrng2 = rtrim(@cPrntStrng2)

            /* Print the two strings. */
            PRINT @cPrntStrng1
            PRINT @cPrntStrng2

            /* Remove the current entry from the stack (Pop) */
            DELETE #Stack
            WHERE  #Stack.iLevel = @iLevel
            AND    #Stack.iItem = @iCurrent

            /* Add (push) to the stack all the children of the current
            ** node.
            */
            INSERT INTO #Stack
            SELECT D.depid,
                   @iLevel + 1
            FROM   sysdepends D
            WHERE  D.id = @iCurrent

            /* If any were added, then we must descend another level. */
            IF @@rowcount > 0
            BEGIN
                SELECT @iLevel = @iLevel + 1
            END

        END
        ELSE
        BEGIN
            /* We have reached a leaf node.  Move back to the previous
            ** level and see what else is left to process.
            */
            SELECT @iLevel = @iLevel - 1
        END

    END /* While (@iLevel > 0) */

    PRINT " "

    RETURN (0)

/*------------------------ Error Handling --------------------------*/
ErrorNotLocal:
    /* 17460, Table must be in the current database. */
    EXEC sp_getmessage 17460, @vcErrMsg OUT
    PRINT @vcErrMsg
    RETURN (1)

ErrorNotFound:
    /* 17461, Table is not in this database. */
    EXEC sp_getmessage 17461, @vcErrMsg OUT
    PRINT @vcErrMsg
    PRINT " "

    PRINT "Local object types and objects are:"

    SELECT "Object Type" = type,
           "Object Name" = name
    FROM   sysobjects
    WHERE  type IN ("U","TR","P","V")
    ORDER BY type, name

    RETURN (1)

END
GO

grant execute on sp_dos to public
go
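
   A minimal usage sketch, using the ti_users trigger from the sample
   report in the header comments (substitute any table, view, trigger or
   procedure name in the current database):

  exec sp_dos ti_users
  go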

   
     _________________________________________________________________

   
     _________________________________________________________________
   
                           Q4.13: SQSH, RELEASE 1.0
                                       
   _Last Update: 22-May-1996 20:45:00 EST_
     _________________________________________________________________
   
   
   
Sybase-FAQ Notice

   You are currently reading a special Sybase-FAQified version of my home
   page. I will attempt to keep it as up-to-date as possible, however
   there is a chance that it may lag somewhat behind my personal page
   (http://www.voicenet.com/~gray/sqsh.html). Also, this version has
   been stripped of changelog and status information in order to shorten
   it up a bit for the plain-text version of the FAQ.
   
What is SQSH?

   Sqsh (pronounced skwish) is short for SQshelL (pronounced s-q-shell).
   It is intended as a replacement for the venerable 'isql' program
   supplied by Sybase. It came about after years of frustration at
   trying to do real work with a program that was never meant to perform
   real work.
   
   Sqsh is much more than a nice prompt (a la 'dsql', from David B.
   Joyner), it is intended to provide much of the functionality provided
   by a good shell, such as variables, redirection, pipes,
   back-grounding, job control, history, command completion, and dynamic
   configuration. Also, as a by-product of the design, it is remarkably
   easy to extend and add functionality.
   
Join the sqsh mailing list

   Thanks to the tireless efforts of Pablo Sanchez (our fearless
   maintainer of the Sybase FAQ), and the excellent people at SGI, the
   sqsh mailing list is now available for use! Keep track of the latest
   developments. Offer suggestions for future additions. Offer me money
   (grin).
   
   To subscribe, send mail to external-majordomo@postofc.corp.sgi.com
   with a _body_ of:

   subscribe sqsh-users [optional e-mail address]

   Or with a _body_ of just:

   help

   for a list of commands. Once subscribed you may send mail to the
   sqsh-users mailing list by addressing your mail to:

   sqsh-users@postofc.corp.sgi.com

Availability

   Sqsh may be found on the following sites:
     * http://www.voicenet.com/~gray/sqsh-1.0.tar.gz 
     * ftp://poseidon.csci.unt.edu/pub/sqsh 
     * ftp://ftp.netcom.com/pub/he/heyjude/gray 
     * ftp://davox2.davox.com/pub/sqsh 
       
   Keep in mind that the different sites sometimes get out of sync, so at
   times the latest version may only be available at one of them.
   
   If you are wondering what the funny '.gz' extension is on the end of
   some of the files, I highly recommend that you grab a copy of
   ftp://prep.ai.mit.edu/pub/gnu/gzip-1.2.4.tar or you can get a regular
   UNIX compressed version http://www.voicenet.com/~gray/sqsh-1.0.tar.Z.
   
   I also try to keep around the previous release
   http://www.voicenet.com/~gray/sqsh-0.8.tar.gz, just in case I royally
   screw up the current release (which could happen).
   
   If you have trouble reaching any of the sites above, you can send me
   e-mail at gray@voicenet.com, I am typically pretty good about
   responding.
   
Licensing

   99% of the software that I use is free, so I like to give back in
   kind. Sqsh is held under the GNU General Public License (GPL) and
   therefore may be freely distributed under the terms of this license.
   
Building and Installing

   Refer to the INSTALL file for directions on installation, and, once
   again, thank the GNU folk for the wonderful GNU Autoconf automated
   configuration system.
   
Commands

   Sqsh provides all of the commands provided by isql (such as go, reset,
   etc.), which wasn't hard since there aren't many of them, along with a
   large base of extended commands. Typically all commands in sqsh are
   prefixed with a '\' to avoid collision with T-SQL syntax. For
   example:

   1> \help
   Available commands:
   \abort       \alias       \buf-append  \buf-copy    \buf-edit
   \buf-get     \buf-load    \buf-save    \buf-show    \connect
   \done        \echo        \exit        \go          \help
   \history     \jobs        \kill        \loop        \quit
   \read        \reconnect   \redraw      \reset       \set
   \shell       \show        \sleep       \unalias     \wait
   \warranty
   Use '\help command' for more details

   However, for those of you that just can't stand the '\', all commands
   may be aliased to any other name that you wish via the '\alias'
   command.
   
Variables

   Variables are provided in sqsh as both a mechanism for storing and
   retrieving frequently used information as well as a method for
   configuring the behavior of the shell.
   
   For example:

    1> \set table_name="syscolumns"
    1> select "Count" = count(*) from $table_name
    2> go

    Count
    -----------
           1123
    (1 row affected)

   And:

    1> \set table_name="syscolumns"
    1> \set headers=off
    1> select count(*) from $table_name
    2> go

           1123
    (1 row affected)

   All variables are documented in depth in the manual page for sqsh, and
   the expansion of variables within the SQL command buffer may be turned
   off and on as desired (via the $expand variable).
   
Redirection & Pipes

   How many times have you watched a result set disappear from your
   screen because you didn't hit ^S fast enough? Well, no more. Now, any
   command available in sqsh may be redirected to/from a file or
   pipelined to another process. For example, it is now legal to type:

    1> select * from sysobjects
    2> go | grep test | more

   You may also redirect output to files and (if you are careful) can
   redirect input from files:

    1> select * from sysobjects
    2> go 2>/dev/null >/tmp/objects.txt

Backgrounding & Job Control

   Suppose you want to run a long complex query and continue to work
   while waiting for the results. With isql, the most effective way to do
   this was to run two copies of isql. With sqsh you can now do:

    1> select ... /* big nasty select */
    2> go &
    Job #1 started
    1>

   After typing 'go &', sqsh launches a child process, which reconnects
   to the database and performs the desired query. This is similar to job
   control within a standard shell except that, by default, in sqsh the
   background job's output will be deferred until the job completes. So
   when the big nasty query, above, completes you will see a message
   like:

    1> sp_helptext ....
    Job #1 completed (output pending)
    2>

   and to show the output of the job you can do:

    1> \show 1 | more

   Once again, the behavior of output deferral may be turned on and off
   via the $defer_bg variable.
   
   Sqsh also provides the commonly used job control commands available in
   such shells as csh and bash, such as \jobs (to display running jobs)
   and \kill (to terminate jobs).
   
Command Substitution

   With the 1.0 release, sqsh is slowly beginning to look more-and-more
   like a real shell with the addition of command substitution. This
   feature allows a UNIX command to be substituted anywhere within a sqsh
   command or within a SQL batch simply by placing the command within
   backquotes (or ` -- this may not come out to be a backquote depending
   on which font your web browser is using). For example:

  1> select count(*) from `echo syscolumns`
  2> go | `echo more`

   Currently, sqsh allows a multi-line substituted command within a SQL
   batch; however, this is not yet supported on command lines such as
   'go'. For example you can do:

  1> select count(*) from `echo
  2> syscolumns`
  3> go

   Whereas you _cannot_ do:

  1> select count(*) from syscolumns
  3> go | `echo
  more`

   Hopefully, in the near future I'll make sqsh smart enough to support
   line-continuations with sqsh commands. Believe it or not, it isn't
   that easy to do.
   
In-line Go

   As of release 0.5, sqsh now supports a form of in-line go, via a ;
   placed anywhere within the current line, such as:

    1> sp_who ;

   And, anything that can follow the "go" command may also follow the
   inline ;

    1> sp_who ; | more

   Sqsh even attempts to be relatively smart, and ignores semicolons
   found within single or double quotes of a single command, although it
   currently does not deal with semicolons located in comments. Note, in
   order to turn this feature on, execute:

    1> \set semicolon_hack=1

History

   Sqsh provides two methods of history control: line-by-line history
   using either vi or emacs styles (via
   ftp://prep.ai.mit.edu/pub/gnu/readline-2.0.tar.gz), and batch history,
   so that entire statements may be re-run or edited:

    1> \history
    ...
    (12) select name, id
             from syscolumns
             where name like "%$name%"
    (13) select distinct title, type
         from titles
         where title in
            (select title
             from titles, titleauthor, authors
             where titles.title_id = titleauthor.title_id
             and authors.state = "CA")
     ..

   Most commands support a csh-style reference to history entries via
   '!!', or '!n'.

     1> \vi !!

More Information

   If you still aren't convinced, and would like more information, see
   http://www.voicenet.com/~gray/sqsh.1.html (thanks to man2html, by
   Earl Hood).
   
Who am I?


    Scott C. Gray ( gray@voicenet.com, sgray@sybase.com ).

                               Q4.14: SP_GETDAYS
                                       
   
     _________________________________________________________________
   

use master
go

drop proc sp_getdays
go

create procedure sp_getdays
 @days int OUTPUT,
 @date datetime=NULL
as
 declare @m int,
     @y int
 if (@date is NULL)
     select @date = getdate()
 select @m = datepart(mm, @date)
 if (@m = 2)
 begin
     select @y = datepart(yy, @date)
     if (@y % 4 = 0) and ((@y % 100 != 0) or (@y % 400 = 0))
         select @days = 29
     else
         select @days = 28
 end
 else
 begin
     if (@m > 7)
         select @m = @m - 7
     select @days = (30 + (@m & 1))
 end
 return (1)
go

grant execute on sp_getdays to public
go
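
   A minimal usage sketch (the date literal below is just an example;
   omit the second argument to use the current date):

  declare @days int
  exec sp_getdays @days output, "Feb 29 1996"
  select "days in month" = @days
  go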

   
     _________________________________________________________________
Q4.15: ddl_insert.pl

----------------------------------------------------------------------------
In order to use this script you must have Sybperl installed -- see Q4.4 for
more information.

#!/usr/local/bin/perl

# Author:  Vincent Yin  (umyin@mctrf.mb.ca)   Aug 1994    Last Modified: May 1996

chomp($basename = `basename $0`);

$usage = <<EOF;
USAGE
    $basename database userid passwd pattern [ pattern... ]

DESCRIPTION
    Prints isql scripts that would insert records into the
    tables whose names match any of the patterns in command line.  In
    other words, this program reverse engineers the data in a given
    table(s).  Roughly, it `select * from <table>', analyses the data
    and table structure, then prints out a bunch of
            insert <table> values ( ... )
    statements that would re-populate the table.  It's an alternative
    to `bcp'.  `bcp' has its limitations (e.g. one often needs to turn on
    'select into/bulk copy' option in the database before running bcp.)

    Table names are matched to <pattern> with Transact-SQL's LIKE clause.
    When more than one pattern is specified on command line, the LIKE
    clauses are OR'ed.  In any case, the LIKE clause(s) is logged to
    the beginning of the output as a comment, so that you'll see how this
    program interprets the command line.

    The SQL script is printed to stdout.  Since it only prints out the SQL
    but doesn't submit it to the SQL server, this procedure is safe to run.
    It doesn't modify database in any way.

EXAMPLES
    To print this usage page:
            % $basename
    To print SQL that populates the table master..sysobjects and systypes:
            % $basename master userid passwd 'sysobjects' 'systypes'
    To print SQL that populates all system tables in master db:
            % $basename master userid passwd 'sys%'

BUGS
    Embedded line breaks in strings are allowed in Sybase's isql, but not
    allowed in SQLAnywhere's isql.  So this script converts embedded line
    breaks (both DOS styled and UNIX styled) to blank characters.

EOF

$batchsize = 10;        # The number of INSERTs before a `go' is issued.
                        # This is to make the output compact.

# .................... No change needed below this line ........................

use Sybase::DBlib;

die $usage unless $#ARGV >= 3;
($db, $user, $passwd, @pattern) = @ARGV;

$likeclause = &sql_pattern_to_like_clause('name', @pattern);

print <<EOF;
-- This script is created by $0.
-- It would generate INSERT statements for tables whose names match the
-- following pattern:
/* $likeclause
*/

set nocount on
go
EOF

$dbh = new Sybase::DBlib $user, $passwd;
$dbh->{dbNullIsUndef} = 1;
$dbh->dbuse($db);

    # Get the list of tables.
$tablelist = $dbh->sql("select name from sysobjects
                   where type in ('S','U') and $likeclause
                   order by name
                  ");

foreach $tableref (@$tablelist) {
    $table = @$tableref[0];
    print "\n\n/*.............. $table ...............*/\n";
    print "-- ", `date`, "\n";
    print "declare \@d datetime\n";
    print "select \@d = getdate()\n";
    print "print        '       %1!    $table', \@d\ngo\n\n";
    print "truncate table $table  -- Lookout !!!!!!\ngo\n\n";

    $dbh->dbcmd("select * from $table");
    $dbh->dbsqlexec;
    $dbh->dbresults;

    while (@row = $dbh->dbnextrow()) {
        print "insert $table values(";
        for ($i=0; $i <= $#row; $i++) { # build the INSERT statement
                # Analyse datatype to decide if this column needs to be quoted.
            $coltype = $dbh->dbcoltype($i+1);
            if (!defined($row[$i])) {
                print 'NULL';    # Never quote NULL regardless of datatype
            }
            elsif ($coltype==35 or $coltype==39 or $coltype==47 or
                   $coltype==58 or $coltype==61 or $coltype==111 ){
                   # See systypes.type/name for explanation of $coltype.
                $row[$i] =~ s/\r|\n/ /g; # Handles both DOS and UNIX line breaks
                $row[$i] =~ s/"/""/g;    # Stuff double quotes
                print "\"" . $row[$i] . "\"";
            }
            else {
                print $row[$i];
            }
            print ", " unless $i == $#row;
        }
        print ")\n";            # wrap up the INSERT statement.
                                # print `go' at every $batchsize interval.
        print "go\n" unless $dbh->DBCURROW % $batchsize;
    }
    print "\ngo\n\n";           # print a `go' after the entire table is done.
    print "-- ### End for $table:  rowcount = ", $dbh->DBCURROW, "\n";
}

# ................................. sub ........................................
sub main'sql_pattern_to_like_clause {
    local($field_name, @pattern) = @_;
    $like_clause = "\t(  1 = 0 ";
    foreach (@pattern) {
        $like_clause .= "\n     or $field_name like '" . $_ . "' ";
    }
    $like_clause .= "\n\t) \n";
}

----------------------------------------------------------------------------

                             Q4.16: SP_EXAM_SPACE
                                       
   
     _________________________________________________________________
   

use master
go

if exists (select * from sysobjects where type = "P" and name ="sp_exam_space")
  drop procedure sp_exam_space
go

CREATE Procedure sp_exam_space
as
-- Purpose: To profile the space used by tables and indexes in a database.
-- Copyright (c) 1996 Robert L. Munson
-- Permission granted to Sybase Open Line on Compuserve to distribute
-- Permission granted for individual non commercial use.
-- All other rights reserved.

-- Written by Bob Munson for the Sybase Community
-- Dedicated to Unicef, the United Nations Childrens Fund 1-800-FOR-KIDS
-- If you find sp_exam_space of value, I invite your support of Unicef.

declare @tot_rows_no int
declare @tot_data_pgs_no int
declare @tot_indx_pgs_no int

/* Create a table to contain the report */
CREATE Table #report(section varchar(30),               /* Major section
of the report */
        seq int,                /* Line number within the section */
        text varchar(120)               /* Report content */
        )

CREATE Clustered Index report_pkix on #report(section,seq)

 /* Header Section */
Insert Into #report(section, seq, text)
   Select "aaa",-4,
   "sp_exam_space Report of " +  isnull(@@servername,"-- Not Named --")
+"."+db_name() +
   " on " + convert(char(13),getdate(),107)

Insert Into #report(section, seq, text)
   Select "aaa",-1, @@version

/* total space used in database */
Insert Into #report values("abc",-2,"   ")
Insert Into #report
Select "abc",-1,"Total database space"

/* Number of segments */
Insert Into #report values("are",-2,"   ")
Insert Into #report values("are",-1,"Segments - from syssegments")
Insert Into #report
select "are",segment,
"   Segment: " + convert(varchar,segment) + "   " + name from syssegments

 /* Number of objects in each segment */
Insert Into #report values("bak",-2,"   ")
Insert Into #report values("bak",-1,"Number of objects, rows and pages in
each segment")
Insert Into #report
Select "bak", segment,
"   segment " + convert(varchar,segment) +
"   " + convert(varchar,count(*)) + " objects   " +
convert(varchar,sum(rowcnt(doampg))) + " rows   " +
convert(varchar,sum(reserved_pgs(id,doampg))) + " data reserved pages   "
+
convert(varchar,sum(reserved_pgs(id,ioampg))) + " Index Reserved Pages"
from sysindexes
group by segment

/* Megs in each segment */

/* Space for data, space for clustered indexes, space for indexes, space
for chained */
insert into #report values("car",-2, "   ")
Insert Into #report Values("car",-1,"Distribution of space by indid - from
sysindexes")
Insert Into #report
Select "car", 1-abs(sign(indid-1)) +
2*sign(1+sign(indid-2))*sign(1-sign(indid-254)) +
255*(1-abs(sign(indid-255))),
"   " + convert(varchar,1-abs(sign(indid-1)) +
2*sign(1+sign(indid-2))*sign(1-sign(indid-254)) +
255*(1-abs(sign(indid-255)))) + "   " +
convert(varchar,sum(reserved_pgs(id,doampg)) +
sign(indid-1)*sum(reserved_pgs(id,ioampg))) + " pages  " +
convert(varchar,sum(reserved_pgs(id,doampg)) +
sign(indid-1)*sum(reserved_pgs(id,ioampg))/512.0) + " Megs  "
from sysindexes
group by 1-abs(sign(indid-1)) +
2*sign(1+sign(indid-2))*sign(1-sign(indid-254)) +
255*(1-abs(sign(indid-255))),sign(indid-1)

create table #dist(log_no int, count_no int, rows_no int null,pgs_no int
null)

Select @tot_rows_no=sum(rowcnt(doampg))/100,
   @tot_data_pgs_no = sum(reserved_pgs(id,doampg))/100
from sysindexes
where indid <=1 and id >=100

Select @tot_indx_pgs_no = sum(reserved_pgs(id,ioampg))/100
from sysindexes
where indid >=2 and indid <250 and id >=100

/* Distribution of #rows in tables */
Insert Into #report values("dar",-2, "   ")
Insert Into #report values("dar",-1, "Distribution of Rows - from
sysindexes (excluding system tables)")
insert into #dist
select charindex(substring(convert(varchar,rowcnt(doampg)),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,rowcnt(doampg))) ,  count(*),
sum(rowcnt(doampg)),sum(reserved_pgs(id,doampg))
from sysindexes
where indid <=1 and id >=100
group by charindex(substring(convert(varchar,rowcnt(doampg)),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,rowcnt(doampg)))

Insert Into #report
Select "dar",log_no,
"   " +convert(varchar,count_no) + " Tables have "+
substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) +
replicate("0",(log_no-1)/3) + " <= #rows <= "
+ substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) +
replicate("9",(log_no-1)/3) +
 "  Totals: Rows=" + convert(varchar,rows_no) + " (" +
convert(varchar,(rows_no/@tot_rows_no)) + "%)"
+ " Pages=" + convert(varchar,pgs_no) + " (" +
convert(varchar,(pgs_no/@tot_data_pgs_no)) + "%)" +
" Megs=" + convert(varchar,convert(numeric(7,2),pgs_no/512.0))
from #dist

/* distribution of rows per page */

/* Distribution of pages for data */
truncate table #dist
Insert Into #report values("dba",-2, "   ")
Insert Into #report values("dba",-1, "Distribution of Pages - from
sysindexes (excluding system tables)")
insert into #dist
select charindex(substring(convert(varchar,reserved_pgs(id,doampg)),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,reserved_pgs(id,doampg))) ,  count(*),
sum(rowcnt(doampg)),sum(reserved_pgs(id,doampg))
from sysindexes
where indid <=1 and id >=100
group by
charindex(substring(convert(varchar,reserved_pgs(id,doampg)),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,reserved_pgs(id,doampg)))

Insert Into #report
Select "dba",log_no,
"   " + convert(varchar,count_no) +" Tables Have " +
substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) +
replicate("0",(log_no-1)/3) +
" <=#pages<= "
+ substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) +
replicate("9",(log_no-1)/3)
+ "  Totals: Rows=" + convert(varchar,rows_no) + " (" +
convert(varchar,(rows_no/@tot_rows_no)) + "%)"
+"  Pages=" + convert(varchar,pgs_no) + " (" +
convert(varchar,(pgs_no/@tot_data_pgs_no)) + "%)" +
"  Megs=" + convert(varchar,convert(numeric(7,2),pgs_no/512.0))
from #dist


/* distribution of rows in indexes */
truncate table #dist
Insert Into #report values("dmv",-2, "   ")
Insert Into #report values("dmv",-1, "Distribution of Rows for indexes -
from sysindexes (excluding system tables)")

insert into #dist
select charindex(substring(convert(varchar,rowcnt(datarows.doampg)),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,rowcnt(datarows.doampg))) ,  count(*),
sum(rowcnt(datarows.doampg)),sum(reserved_pgs(ndx.id,ndx.ioampg))
from sysindexes ndx,sysindexes datarows
where ndx.indid >=2 and ndx.indid <=250 and ndx.id >=100
and ndx.id=datarows.id and datarows.indid <=1 and datarows.id>=100
group by
charindex(substring(convert(varchar,rowcnt(datarows.doampg)),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,rowcnt(datarows.doampg)))

Insert Into #report
Select "dmv",log_no,
"   " + convert(varchar,count_no) +" Indexes Have " +
substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) +
replicate("0",(log_no-1)/3) +
" <=#rows<= "
+ substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) +
replicate("9",(log_no-1)/3)
+ "  Totals: Rows=" + convert(varchar,rows_no) + "(" +
convert(varchar,(rows_no/@tot_rows_no)) + "%)"
+ "  Pages=" + convert(varchar,pgs_no) + " (" +
convert(varchar,(pgs_no/@tot_indx_pgs_no)) + "%)" +
"  Megs=" + convert(varchar,convert(numeric(7,2),pgs_no/512.0))
from #dist




/* Distribution of length of rows */
insert Into #report values("ept",-2, "   ")
Insert Into #report Values("ept",-1, "Distribution of minimum length of
rows - from sysindexes (excluding system tables)")

truncate table #dist
Insert Into #dist
select charindex(substring(convert(varchar,minlen),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,minlen)) , count(*),
sum(rowcnt(doampg)),sum(reserved_pgs(id,doampg))
from sysindexes
where indid <=1
group by charindex(substring(convert(varchar,minlen),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,minlen))

Insert into #report
Select "ept",log_no,
"   " + convert(varchar,count_no) + " Tables Have "
+substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) +
replicate("0",(log_no-1)/3) + " <=minlen <= "
+ substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) +
replicate("9",(log_no-1)/3)
+ "  Totals: Rows=" + convert(varchar,rows_no) + " (" +
convert(varchar,(rows_no/@tot_rows_no)) + "%)"
+ "  Pages=" + convert(varchar,pgs_no) + " (" +
convert(varchar,(pgs_no/@tot_data_pgs_no)) + "%)"
+ "  Megs=" + convert(varchar,convert(numeric(7,2),pgs_no/512.0))
from #dist

Insert Into #report Values("fbi",-2, "   ")
Insert Into #report Values("fbi",-1,"Distribution of maxlen of rows - from
sysindexes (excluding system tables)")
truncate table #dist
Insert Into #dist
select charindex(substring(convert(varchar,maxlen),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,maxlen)) ,
count(*),sum(rowcnt(doampg)),sum(reserved_pgs(id,doampg))
from sysindexes
where indid <=1 and id>=100
group by charindex(substring(convert(varchar,maxlen),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,maxlen))

Insert Into #report
Select "fbi",log_no,
"   " + convert(varchar,count_no) +" Tables Have " +
substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) +
replicate("0",(log_no-1)/3) + " <=maxlen<= "
+ substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) +
replicate("9",(log_no-1)/3) +
"  Totals: Rows=" + convert(varchar,rows_no) +" (" +
convert(varchar,(rows_no/@tot_rows_no)) + "%)"
+ "  Pages=" + convert(varchar,pgs_no) +" (" +
convert(varchar,(pgs_no/@tot_data_pgs_no)) + "%)" +
 " Megs=" + convert(varchar,convert(numeric(7,2),pgs_no/512.0))
from #dist

/* distribution of sum of the length of columns in the tables */
Insert Into #report Values("fda",-2,"   ")
Insert Into #report Values("fda",-1,"Distribution of sum of column
lengths")

/* sysprocedures - number of rows */
Insert Into #report Values("gao",-2,"   ")
Insert Into #report Values("gao",-1,"Distribution of rows in
sysprocedures")

create table #prep_dist(id_no int, type_no int, count_no int)
insert into #prep_dist
Select id,type,count(*)
from sysprocedures
group by id,type

truncate table #dist
Insert Into #dist(log_no,count_no)
select charindex(substring(convert(varchar,count_no),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,count_no)) , count(*)
from #prep_dist
group by charindex(substring(convert(varchar,count_no),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,count_no))

Insert Into #report
Select "gao",log_no,
"   " + convert(varchar,count_no) + "  Procs Have  " +
substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) +
replicate("0",(log_no-1)/3) + " <= #rows <= "
+ substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) +
replicate("9",(log_no-1)/3)
from #dist


/* syscomments - number of rows */
Insert Into #report Values("hal",-2, "   ")
Insert Into #report Values("hal",-1,"Distribution of rows in syscomments")

truncate table #prep_dist
insert into #prep_dist
Select id,1,count(*)
from syscomments
group by id

truncate table #dist
Insert Into #dist(log_no,count_no)
select charindex(substring(convert(varchar,count_no),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,count_no)) , count(*)
from #prep_dist
group by charindex(substring(convert(varchar,count_no),1,1),
"0000111112344456789")/5 -3 +
3*datalength(convert(varchar,count_no))

Insert Into #report
Select "hal",log_no,
"   " + convert(varchar,count_no) + "  Objects Have  " +
substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) +
replicate("0",(log_no-1)/3) + "  <= #rows <=  "
+ substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) +
replicate("9",(log_no-1)/3)
from #dist


/* Generate the report */
Select "sp_exam_space Report"=text from #report
order by section, seq
go

grant execute on sp_exam_space to public
go
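
   Once the procedure has been created, it can be run from isql against
   the database you want to examine. A minimal sketch, assuming the
   procedure takes no arguments, was installed where the Server can find
   it (for example, sybsystemprocs on System 10), and that pubs2 is the
   database of interest:

        1> use pubs2
        2> go
        1> sp_exam_space
        2> go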

   
     _________________________________________________________________
-- 
Pablo Sanchez              | Ph # (415) 933.3812        Fax # (415) 933.2821
pablo@sgi.com              | Pg # (800) 930.5635  -or-  pablo_p@corp.sgi.com
===============================================================================
I am accountable for my actions.   http://reality.sgi.com/pablo [ /Sybase_FAQ ]

----------------------------------------------------------------------

Path: news1.ucsd.edu!ihnp4.ucsd.edu!munnari.OZ.AU!news.mel.connect.com.au!news.mira.net.au!Germany.EU.net!howland.reston.ans.net!gatech!news.mathworks.com!enews.sgi.com!news.corp.sgi.com!mew.corp.sgi.com!pablo
From: pablo@sgi.com (Pablo Sanchez)
Newsgroups: comp.databases.sybase,comp.answers,news.answers
Subject: Sybase FAQ: 6/8 - section 5
Supersedes: <FAQ.section_5_833665144@sgi.com>
Followup-To: comp.databases.sybase
Date: 1 Jul 1996 14:32:22 GMT
Organization: Silicon Graphics, Inc. Nederland, CO. USA
Lines: 12424
Approved: news-answers-request@MIT.EDU
Message-ID: <FAQ.section_5_836231368@sgi.com>
References: <FAQ.section_4_836231368@sgi.com>
Reply-To: pablo@sgi.com
NNTP-Posting-Host: mew.corp.sgi.com
Summary: Info about SQL Server, bcp, isql and other goodies
Posting-Frequency: monthly
Originator: pablo@mew.corp.sgi.com
Xref: news1.ucsd.edu comp.databases.sybase:29425 comp.answers:15547 news.answers:62018

Archive-name: databases/sybase-faq/part6
URL: http://reality.sgi.com/pablo/Sybase_FAQ



   
   
                             Q5.1: TECHNICAL NEWS
                                       
 Volume 3, Number 2
May, 1994

   
     _________________________________________________________________
   
   
   
   Disclaimer: No express or implied warranty is made by SYBASE or its
   subsidiaries with regard to any recommendations or information
   presented in SYBASE Technical News. SYBASE and its subsidiaries hereby
   disclaim any and all such warranties, including without limitation any
   implied warranty of merchantability or fitness for a particular
   purpose. In no event will SYBASE or its subsidiaries be liable for
   damages of any kind resulting from use of any recommendations or
   information provided herein, including without limitation loss of
   profits, loss or inaccuracy of data, or indirect, special, incidental or
   consequential damages. Each user assumes the entire risk of acting on
   or utilizing any item herein including the entire cost of all
   necessary remedies.
   
   Staff
   Principal Editor: Leigh Ann Hussey
   Contributing Writers:
   John Blair, James Gath, Karen Hogoboom, Jeff Lichtman, Steve Olson,
   Benjamin von Ullrich, Elton Wildermuth, Sybase Engineering, Sybase
   Tech Support
   Send comments and suggestions to:
   SYBASE Technical News
   6475 Christie Avenue
   Emeryville, CA 94608
   
   This issue of the SYBASE Technical News contains new information about
   your SYBASE software. If needed, duplicate this newsletter and
   distribute it to others in your organization. Keep this newsletter
   with your SYBASE Troubleshooting Guide. 
   
Get Certified!

   
   
   A new program has been recently announced which will allow Sybase
   customers, employees, partners and others to demonstrate official
   competence in the SYBASE architecture and product set. The Certified
   SYBASE Professional (CSP) Program is targeted at client/server
   architects, designers, developers, systems and database administrators
   and support engineers. The first CSP certification is the Certified
   Sybase DBA. This certification program will be beta tested in North
   America and the UK in January. Other likely certifications will
   include Open Interfaces Developer, Application Developer and
   Architect. Professional Services is working with Customer Services and
   Support to ensure that Sybase offers a single integrated certification
   program for both development and support professionals. For more
   information contact Wendy Washington at Sybase, 510-922-0959, and ask
   for the CSP Brochure. 
   
Troubleshooting Guide Update

   
   
   Troubleshooting Guide Goes Online
   
   The new version of the Sybase Troubleshooting Guide is on its way! It
   includes over 60 new error messages, as well as new and revised
   material on many Sybase products. Due to the increasing amount of
   information, it now comes in two volumes: a Server volume and a
   Tools and Connectivity volume.
   
   The new Troubleshooting Guide will be available in both paper and
   electronic form. Both volumes will be included on the AnswerBase
   CD-ROM support product. The Server volume will be included on the
   first SyBooks CD-ROM publications product. Both AnswerBase and SyBooks
   CDs will be available in Q3/Q4.
   
   Mass updates of future versions of the Troubleshooting Guide will be
   provided free of charge in AnswerBase and SyBooks formats. Paper
   manuals will still be available for purchase (or print your own from
   the electronic version!).
   
   Getting Your Troubleshooting Guide Feedback
   
   The goal of the Sybase Troubleshooting Guide is to help you better use
   (and support) Sybase products, and to do so more self-sufficiently. To
   accomplish this, we need your feedback. To this end, a mail alias
   called tsg has been established. The intention of this alias is to
   allow Sybase customers and employees to comment on the Troubleshooting
   Guide by mailing tsg@sybase.com with:
     * Corrections
     * Requests for specific additions
     * Additions (i.e., written material)
     * Comments about which sections are particularly helpful
     * Comments about which sections are not clear
     * Any other input you might have
       
       tsg will not be a forum for answering questions best taken to
       Technical Support, but will be your chance to make the
       Troubleshooting Guide more useful for everyone who uses it.
       
       The next issue of the Troubleshooting Guide will be slightly
       different from previous issues. It will come in two volumes, a
       Server volume, and a Connectivity volume, and will be released in
       both hardcopy and electronic versions (Answerbase and SyBooks).
       Sybase Support Publications is considering releasing future issues
       of the Troubleshooting Guide only in electronic format; customers
       are requested to mail tsg@sybase.com to give us feedback on this
       topic.
       
   
   
803 Error Installing from CD-ROM

   
   
   Numerous customers have called about receiving 803 errors when they
   try to install SYBASE products from CD-ROM. Here are the complete
   instructions for such installations; please disseminate this
   information as widely as you wish.
   
   Step 1: Login as root.
   
   Step 2: Mount the CD as a filesystem in a UNIX machine. The exact mount
   command differs between platforms. In the following examples,
     * /dev/sr0 stands for your CD-ROM device name.
     * /cdrom is a local directory you have created.
     * mount usually exists in the /etc directory. You may have to cd
       /etc before you can issue the command.
   
   Sun 4
   
        # mount -rt hsfs /dev/sr0 /cdrom
   
   Sun_SVR4 (Solaris)
   
        # mount -o ro -F hsfs /dev/sr0 /cdrom
   
   _DEC AXP OSF_
   
        # mount -rt cdfs -o noversion /dev/sr0 /cdrom
   
   Be sure that the directory is writable by the world, and large enough
   for the install to complete! Don't forget to log out as root before
   continuing installation. SYBASE software should be installed by the
   sybase user in order for permissions to be set correctly.
   
   Step 3: After you have mounted the CD, go to the directory in which it
   is mounted.
   
   Step 4: Run sybload -D, which is the disk-install version:
   
        % cd /cdrom
        % ls
        sybload sybimage
        % ./sybload -D
   
   sybload is an executable; sybimage is the global file containing the
   suite of products. sybload will prompt you for the file used for disk
   installation.
   
   Step 5: Give it the name /cdrom/sybimage. From there onward, the
   process is the same as installation from tape.
       
   
   
Hanging, Sleeping, and the "Zombie" Process

   
   
   What are the different states of SLEEP? When all processes are shown
   as SLEEPING by sp_who except the one which issued the sp_who command,
   how can a user tell the difference between hanging versus running
   processes? What is a "zombie" process and how can it be dealt with?
   
   Definitions
   
   In pre-4.9.2 SQL Servers, the output of sp_who could be difficult to
   interpret. Processes showed only one type of SLEEP status, "sleeping".
   In System 10, and 4.9.2 Rollup 2115 and above, sp_who shows four types
   of sleep along with the other possible statuses:

Value           Meaning
-----           -------
infected        The server erred with a stack trace, and the process
                got an error that would normally kill it. The process
                is infected instead of killed.

background      This process is in the background.

recv sleep      The process is waiting for input from the client.

send sleep      The process is waiting for a write to the client to complete.

alarm sleep     The process is waiting for an alarm (usually means the
                process is waiting for a waitfor command to complete).

lock sleep      The process is waiting for a lock to be granted.

sleeping        The process is waiting for something not listed above.
                This is "normal" sleep.

runnable        The process is not waiting for anything, and is ready
                to run, but is not the currently running process.

running         The process is currently running (in a multiprocessing
                system, there can be more than one such process).

stopped         The process is stopped. In ancient history (before
                version 4.0.1), all processes stopped during a
                checkpoint. Now the only time a process is in the
                stopped state is when someone is using the kill command
                on it.

bad status      There is a bad value for the status of this process.
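
   For a quick snapshot of how many tasks are currently in each of these
   states, the status strings can be counted straight from the
   sysprocesses system table. A minimal sketch (column names are from the
   standard system table; check your Server version's documentation):

        1> select status, count(*)
        2> from master..sysprocesses
        3> group by status
        4> go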

   
   
   In uniprocessor hardware there can be only one process RUNNING and all
   other processes are either SLEEPING or RUNNABLE. The next RUNNABLE
   process gets scheduled to run after sp_who finishes. Processes sleep
   for certain events like disk I/O, network I/O, alarm, etc. If all the
   threads are shown as SLEEPING, at least one of them will become
   RUNNABLE after an event on which the thread is waiting.
   
   On a multi-processor machine, if more than one SQL Server engine is
   started, you can see more than one thread in the RUNNING state. The
   number of processes running cannot exceed the number of SQL engines
   running.
   
   It is not possible to find out from sp_who output which client process
   is hung waiting for Server response. But it is possible to find out if
   any process (i.e. thread) is blocked by another by looking at the
   "blk" field of sp_who. For more details please refer to the Commands
   Reference Manual.
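
   For example, rather than scanning sp_who output by eye, blocked tasks
   can be listed directly from sysprocesses, whose blocked column is the
   source of sp_who's "blk" field. A minimal sketch (assuming a reasonably
   recent 4.9.x or System 10 Server):

        1> select spid, status, blocked, cmd
        2> from master..sysprocesses
        3> where blocked != 0
        4> go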
   
   Before System 10 -- Night of the Zombie Process
   
   Pre-System 10 SQL Servers can end up with "zombie" (unkillable
   hanging) processes if the event on which a thread is sleeping never
   happens. In this case, the thread does not run and cannot be killed.
   This anomaly existed right from the first release of 4.0 SQL Server
   until a recent Rollup of 4.9.2 (2115 and above).
   
   The problem is that the SQL Server scheduler is non-preemptive. This
   means that tasks cannot be put to sleep or woken up arbitrarily by the
   SQL Server scheduler; all task context switching is done voluntarily
   by running tasks.
   
   Pre-System 10 SQL Servers handle attention through a signal handler
   set up to catch OUT-OF-BAND data which sets a status bit in the PSS
   (Process Status Structure). This is an asynchronous event. For
   example: a task is about to go to sleep waiting for input, but the
   client cancels the query with dbcancel(). If the signal handler sets
   the bit between the time the task is going to sleep and the time it is
   actually put to sleep, then the server task sleeps forever waiting for
   the client to send some data, and the client sleeps waiting for the
   server to acknowledge the cancel request. This is the well-known
   "dbcancel problem." Another source of trouble can be a DBMS task in
   the Server which is sleeping on a network I/O from a client that just
   isn't there any more (maybe because somebody rebooted the PC on which
   the front end was running).
   
   This kind of task cannot be killed because:
     * The task must be in RUNNABLE state so that the scheduler can
       kill the task the next time it runs.
     * The task cannot be preempted because its state is unknown.
       
   
   
   To complicate the above scenario, if the eternally-sleeping task
   started a transaction, it may potentially hold locks on different
   pages. The only solution for older versions is to reboot the SQL
   Server.
   
   A Wooden Stake for the Zombie Process
   
   As of the 10.0 SQL Server, and 4.9.2 SQL Server Rollup 2115 and above,
   zombie processes can now be killed. The new kill command not only sets
   the bit in the PSS as it used to, but also wakes up the task if it
   determines that the task is sleeping in one of four states:
     * waiting to receive something from the client, a common state
       _(RECV SLEEP)_
     * waiting for a send to be completed by the network service task
       _(SEND SLEEP)_
     * waiting on an alarm because the user did a waitfor delay command
       _(ALARM SLEEP)_
     * waiting on a lock (resource, logical, semaphore, etc.)
       _(LOCK SLEEP)_
   
   
   This means that any task can be cleaned up properly as if an exception
   has occurred while the task was running, provided the task is in one
   of the RECV, SEND, LOCK and ALARM sleep states. The new kill command
   can kill infected processes as well, also a new feature.
   
   The kill command can almost instantaneously kill a task that is
   sleeping on any one of the events except the fifth state: normal sleep
   (where the task is waiting for a resource to post network or disk
   I/O). This was true for older versions of SQL Server, and is still
   true. The reason for this is that all sleeps except "normal sleep"
   have well-known and well-understood backout paths; however, tasks
   sleeping on resources have a variety of different backout paths.
   
   The new kill command will:
     * set the "kill yourself" bit on the task
     * wake up the task normally
     * put the task into the runnable queue
       
       When the scheduler is ready to run the task it finds the "kill
       yourself" bit and aborts the task. For tasks that are in normal
       sleep or for running tasks, the new kill command acts exactly as
       the old kill command: it sets the kill bit in the PSS and it is up
       to the task to wake up and test the bit and decide to kill itself.
       Note that this means that the new kill command may not have an
       effect on all tasks.
       
   
   
   NOTE! If a killed task is in the midst of a transaction, the entire
   transaction will abort. All resource cleanup will occur in the task
   backout path so that no inconsistent resources are left hanging around
   that might cause the SQL Server to hang in a hibernating state and
   eventually have to be rebooted.
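
   As a sketch of the sequence (the spid value 42 below is made up):
   identify the sleeping task with sp_who, then issue kill from a session
   with sa privileges:

        1> kill 42
        2> go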
   
   There were regressions, caused by the new kill command's original
   implementation, which could cause the server to hang (bug 51270) or
   not completely kill the process under certain conditions (bug 48964).
   These bugs were fixed as of Rollup 2359, and can be ordered from Tech
   Support.
   
   This fix is not available on the SQL Server NLM release for Netware.
   Instead, Netware sites must use a different method for avoiding zombie
   processes.
   
   How to Eliminate Zombie Processes on SQL Server NLM
   
   To eliminate Zombie processes from the Novell SQL Server:
    1. Download the STRTLI.EXE file from CompuServe, from the NOVLIB forum
       (currently in forum 1).
    2. Execute the file STRTLI.EXE - this expands to 8 NLMs and 2
       documents. The NLMs are: STREAMS.NLM, TLI.NLM, SPXS.NLM,
       SPXLISFIX.NLM, SPXFSFIX.NLM, SPXSIX2.NLM, IPXS.NLM, and
       PATCHMAN.NLM. The documents are: STRTLI.DOC and PCHMN230.DOC.
    3. STRTLI.DOC contains instructions on how to load the files. Load
       4.2.2 of the NLM SQL Server first, then the new files from Novell.
       If you load SNLMINST after loading the new files, you will have
       written over several of the new files and will need to reload
       them.
       
   
   
DECnet Errors and Hanging Processes

   
   
   This past spring, a customer running the 4.8 version of SQL Server for
   VMS encountered a problem with hanging/sleeping server processes left
   after network/router errors caused disconnects. A change in the AST
   networking strategy for DECnet solved this problem (bug 40459 on VMS,
   bug 38156 on AXP).
   
   The behavior exhibited was that users would switch off their PC front
   ends without properly logging out of connections to the SQL Server.
   The Server would not deassign the channel and delete the mailbox on
   error exit from its connection accepting system call. The customer's
   router and the design of the client application caused so many errors
   that the Server ran out of I/O channels.
   
   Sybase Engineering modified the VMS SQL Server so that it now
   deassigns the channel and deletes the mailbox after rejecting the
   connection. This change, originally made in the System 10 release of
   SQL Server, was back-ported to the 4.9.x codeline and is available as
   part of the latest Rollup. 
   
The Problem of Null Column Names

   
   
   How does one refer, if at all, to an undefined column name resulting
   from a select into of a built-in function? In 10.0 SQL Servers, NULL
   column names are not allowed (see last issue's Special Supplement). In
   earlier SQL Servers, when a name is defined as NULL in syscolumns the
   following situation ensues:

        1> select title_id, title, price, convert(varchar(30), total_sales)
        2> into scratchdb..titletab
        3> from pubs2..titles group by title_id, convert(varchar(30),
        total_sales)
        4> go
        (18 rows affected)

   
   
   This creates a table with four columns, one with no name. Attempts to
   use the null name fail.

        1> use scratchdb
        2> go
        1> create clustered index x on titletab (title_id, "")
        2> with ignore_dup_row
        3> go
        Msg 102, Level 15, State 1:
        Server `SYBASE', Line 1:
        Incorrect syntax near ` `.
        1> select convert (varbinary, name) from syscolumns where
        id=object_id("ben")
        2> go
         --------------------------------------------------
         0x636173656e756d
         0x6c6f675f6964
         0x74696d65696e
         NULL
        (4 rows affected)

   
   
   In order to get around this problem, you may use sp_rename in an
   intuitive way, to get rid of the NULL column name as follows:

        1> sp_rename "titletab.", request_status
        2> go
        Column name has been changed.
        (return status = 0)
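
   After the rename, the column has a real name and can be referenced
   normally; for instance, the index creation that failed above should now
   succeed (this simply repeats the earlier example with the new column
   name):

        1> create clustered index x on titletab (title_id, request_status)
        2> with ignore_dup_row
        3> go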

   
   
"Too Many Open Files" on Solaris

   
   
   If you have a Sun_SVR4 (Solaris) SQL Server that reports the error
   "Too many open files" from a kernel subsystem, you will need to change
   the maximum number of file descriptors per process (NOFILES kernel
   parameter on most systems). There are two ways to reset this value:
    1. Modify your RUNSERVER script as follows, depending on your shell:
       
       sh or ksh RUNSERVER script:  ulimit -n ##
       csh RUNSERVER script:        limit descriptors ##
       
       where ## = the new value for file descriptors
       
    2. Run a program which calls setrlimit() to increase the maximum
       number of file descriptors and then invoke a process which requires
       a large number of fd's (file descriptors).

   
   
   Here are a sample program (set_nofiles.c) and a makefile to show you
   how to do this.
    1. Build the executable by typing the command (assuming you named the
       makefile set_nofiles.mk): make -f set_nofiles.mk
    2. Run the executable by giving it the name of any program to run
       along with its command line options, for example: set_nofiles
       foobar x. You can have it run startserver -fRUNSERVER or the
       dataserver program.
       
   
   
   You must run the ulimit/limit or makefile commands as root in order to
   set the maximum number of file descriptors > 1024.
   
   Note: This procedure is documented in the System 10 SAG Supplement for
   SunOS Release 5.x (SVR4), page 4-1.
   
   *************** source for set_nofiles.c **************************************
   
   /* set_nofiles.c, set_nofiles, Customer Service group, 07/02/93
   /* routine to increase the maximum number of file descriptors per process
   /* Copyright (c) 1993, Sybase, Inc., Burlington, MA 01760
   /* All Rights Reserved
   /*
   /* TITLE: set_nofiles
   /*
   /* START-HISTORY:
   /*
   /* 02 Jul 93 edit 0 - Lance Andersen.
   /* Initial coding.
   /*
   /* END-HISTORY
   /*
   /* START-DESCRIPTION:
   /*
   /* set_nofiles is a program which can be run to execute the RUNSERVER
   /* file (or any other executable) in order to increase the number of
   /* file descriptors available per process in order to avoid the OS
   /* error EMFILE (Too many open files).
   /*
   /* To use this program:
   /* Build as follows:
   /*      cc set_nofiles.c -o set_nofiles
   /*
   /* While logged in as root, set the following ownership and permissions:
   /*      chmod u+s set_nofiles
   /*      chown root set_nofiles
   /*
   /* To execute:
   /*      set_nofiles command
   /*
   /* When set_nofiles executes, it will set the max file descriptors to the
   /* value defined by the MAX_FD #define. The program will run under root
   /* ownership while file descriptors are being changed and then ownership
   /* will revert back to the user who invoked the program (in order to
   /* prevent security breaches).
   /*
   /* END-DESCRIPTION
   /*
   /* START-DESIGN:
   /*
   /* END-DESIGN
   /*
   /* START-FUTURES:
   /*
   /* END-FUTURES
   /*
   /* START-CODE:
   */
   
   /* ********************************************************************** */
   /* Define OS include files                                                 */
   /* ********************************************************************** */
   
   #include <stdio.h>
   #include <ulimit.h>
   #include <sys/time.h>
   #include <sys/resource.h>
   #include <sys/types.h>
   #include <errno.h>
   
   /* ********************************************************************** */
   /* Define constants                                                        */
   /* ********************************************************************** */
   
   #define MAX_FD 3000 /* set max number of fd's per process (NOFILES) */
   
   
   main(argc, argv)

int argc;             /* Number of arguments */
char **argv;         /* arguments */

   
   
   {
   
   struct rlimit rlim; /* soft & hard resource limits */

        /* ****************************************************************** */
        /* Set the maximum and current value for fd's per process (NOFILES)   */
        /* ****************************************************************** */

        rlim.rlim_max= MAX_FD;      /* hard limit */
        rlim.rlim_cur= MAX_FD;      /* soft limit */

        if(setrlimit(RLIMIT_NOFILE, &rlim)==-1)
        {

                /* Oops, we failed, print error msg and OS error number */

                fprintf(stderr,
                "OS error %d encountered while changing RLIMIT_NOFILE to %d\n",
                errno,MAX_FD);

                exit(-1);
        }

        /* ****************************************************************** */
        /* reset the uid to the user who invoked the program so that the      */
        /* process does not run as root.                                      */
        /* ****************************************************************** */

        setuid( getuid() );

        /* ****************************************************************** */
        /* Now execute our program passing required arguments                 */
        /* ****************************************************************** */

        if(argc >1)
                execv(*++argv, argv);
        else
                fprintf(stderr,"Warning, no argument passed to set_nofiles\n");

}     /* END -- MAIN */
   ******* Now the makefile *******
   
   
   # set_nofiles.mk
   #
   # Makefile to create set_nofiles command
   # Note: to run the install portion, you must be logged in as root
   
   DEFAULT_CFLAGS=
   LIBRARIES=
   CFLAGS= -c $(DEFAULT_CFLAGS) $(EFFLAGS)
   INCLUDE=
   
   OBJECT = set_nofiles.o
   
   BUILD_NAME=set_nofiles
   INSTALL_DIR=$(HOME)/bin
   OWNER=root
   
   #
   # Default rule to delete, build and install set_nofiles
   #
   all: clean build install
   
   #
   # Build the binary
   #
   build : $(OBJECT)
        $(CC) $(OBJECT) $(LIBRARIES) -o $(BUILD_NAME)
   
   #
   # build the binaries
   #
   
   $(OBJECT): $(OBJECT:.o=.c)
        $(CC) $(INCLUDE) $(CFLAGS) $<
   
   #
   # Remove all object files
   #
   clean:
        rm -f $(OBJECT) $(BUILD_NAME)
   
   #
   # install the product
   #
   install:
        cp $(BUILD_NAME) $(INSTALL_DIR)/$(BUILD_NAME); cd $(INSTALL_DIR); \
        chown $(OWNER) $(BUILD_NAME); chmod u+s $(BUILD_NAME)
     _________________________________________________________________
   
   
   
Specifying TCP Port in System 10 for Solaris

   
   
   A customer installing Sybase 10.0 on Sun_SVR4 (Solaris) ran into an
   odd problem. With an original specification of TCP port 2000 for the
   SYBASE Server process, the installation failed. When the port number
   was changed to 2025, as shown in the example in the Install Guide, the
   installation worked fine. The manual suggests that any port not
   currently used between 1024 and 65535 can be specified -- what caused
   this failure?
   
   It appears that 2000 was listed in /etc/services. Round numbers like
   2000, 6000, etc. are often used for network services. We recommend,
   therefore, that you avoid using round numbers for your port numbers. 
   
Solaris 2.3, Intimate Shared Memory, and Server EBFs Revisited

   
   
   Sun Support has been passing along information to our mutual customer
   base that either EBF 2592 or EBF 2594 are required, along with their
   patch 101318-35+, to prevent certain system panics. As a result,
   customers are calling Sybase Technical Support and requesting one of
   these EBFs. Further, the 10.0.1 Server Release Notes claim that our
   customers will need EBF 2594 and Sun's patch to prevent Sun OS panics
   while running SQL Server. This article will clarify the issue and make
   some important amendments to Sun Support's information.
   
   The majority of customers will not encounter a problem after
   installing only the Solaris patch. Most customers' Servers will boot
   and run just fine. Customers who have memory configured to a value
   near (or greater than) the physical memory on the machine may
   experience problems. Generally speaking, very few customers fall into
   this group, and those that do would probably be better off configuring
   memory to a level that Intimate Shared Memory can again be used and
   the old Server booted successfully (see Summary notes).
   
   First, a word on Sybase's EBFs. There will be no EBF 2592 for 4.9.2.
   The reason is that ISM use is not a feature of our 4.9.2 Server. The
   only way ISM can be used is if the system forces all user applications
   to use ISM. If that is the case, and the Sun patch has been applied,
   the Server simply will not boot. In that case, disable this Solaris
   kernel feature.
   
   This is not the default behavior, and will only be an issue for
   customers who have modified a system startup file to enable this
   feature.
   
   In a future 4.9.2 Rollup, we may add support for ISM use via this
   Solaris feature. However, since this is a product enhancement rather
   than a bug, there will be no one-off EBF.
   
   The EBF for 10.0.1 will be EBF 2917, not EBF 2594 as was reported in
   the 10.0.1 Release Notes.
   
   Here are answers to some common questions about this issue, followed
   by a summary.
   
   Common Questions and Answers
   
   Q: What is ISM and why is it important?
   
   A: ISM (Intimate Shared Memory) is an enhancement in Solaris 2.X which
      allows multiple processes to share certain low level data structures
      in the Solaris Virtual Memory (VM) system. These data structures are
      not normally shared and are used for virtual-to-physical address
      translation. These translation mappings are loaded into the hardware
      Memory Management Unit (MMU) as required. Sharing these data
      structures means that less loading and unloading of the MMU will be
      required during a context switch.
      
      Depending on the number of engines which are running and the overall
      system load, not having to load and unload the MMU can have a
      considerable positive impact on performance.
      
      One other benefit of using ISM is that the virtual memory is
      actually locked into physical memory by the operating system. This
      means that the Server has true control over what is in memory when
      doing memory management. It also means that a user starting up an
      editor or compiler won't steal memory from the Server resulting in
      the operating system throwing the Server memory pages out to the
      system swap space. Again, this can have a considerable positive
      impact on Server performance.
   
   Q: I assumed that as long as the Server would start up (with the Sun
      patch installed), we would see no more panics, regardless of whether
      or not the EBF is installed. Is that correct?
   
   A: If you have the Sun patch installed, this particular panic condition
      will not occur. User applications such as our Server aren't an
      issue. Our EBF is only required to allow for the change in the
      shmat() system call behavior to allow the Server to boot.
   
   Q: Do you foresee any major system performance issues involved with the
      EBF?
   
   A: No. There may be two extra shmat() system calls at Server boot time,
      the performance effect of which is negligible and only a one-time
      occurrence. Of course, the Server may not use ISM in some instances
      which will affect performance. For example, right now the OS is
      letting you use ISM even when it can't lock down the memory. It
      shouldn't do that because the system will later panic when it tries
      to unlock the pages. However, you do get the benefits of ISM at the
      cost of system panics. With Sun's fix, you won't be using ISM as the
      system will be able to detect that it can't lock down the memory.
      This could have a very visible impact on performance. It is
      preferable to configure the Server to use an amount of memory which
      will allow for ISM use. No Server EBF is required in this case.
   
   Q: We have received and installed a patch from Sun 101318-35+ which is
      supposed to fix Sun's half of the problem. The engineer with whom I
      spoke said that the Sun patch would not stop the panics. He said
      that only applying a Sybase EBF or changing an ISM configuration
      parameter would stop panics.
   
   A: This is incorrect. No user application can panic an OS. All our EBF
      will do is to allow the Server to boot. The ISM parameter the
      engineer referred to probably simply turns off ISM use altogether.
   
   Q: The Sun engineer said that the ISM configuration change would slow
      Sybase performance appreciably and that another customer
      experiencing the same bug had chosen not to make the change.
   
   A: The only performance implication is that the Server won't use ISM if
      the Solaris kernel cannot lock down the memory as it needs to in
      order to prevent a later system panic.
      
      It's a simple question of resources. SQL Server 10.0 can use an OS
      feature such as ISM if the OS can supply the resource. Whether or
      not the OS can supply the resource depends on how the Server is
      configured. That is, you can't configure the Server to use 256MB of
      memory on a machine with only 128MB of physical memory and expect it
      to use ISM. If the Server won't start after installing the Sun fix
      with a given memory configuration value, then decide whether you
      want to decrease the memory to the point at which it will still use
      ISM, or if you want to skip the use of ISM altogether. If you choose
      the first option, you can use the stock Server and Sun's patch and
      your system shouldn't panic unless there are other Solaris bugs
      showing up. If you choose the second option, then you probably need
      EBF 2594 (or 2917, depending on your Server version).
      
      You should certainly decide whether you want to use ISM or not and
      configure memory usage accordingly. Sybase Technical Support
      recommends configuring memory to a point where ISM can be used so
      that the Server truly has control of paging activity.
       
   
   
   Summary
   
   In all cases, customers should install version 35 or later of Sun
   patch 101318. If you are running version 4.9.2 of SQL Server, you must
   not force ISM use through the Solaris kernel. If you are running a
   System 10 Server and want to use ISM, you must configure the Server to
   use an amount of memory that the Solaris kernel can lock into physical
   memory. No EBF is needed in this case. If you would rather skip ISM
   use and want to configure the Server memory to a value too large to be
   locked down by the Solaris kernel, then you'll need EBF 2594 or 2917.
   
   Generally speaking, it is preferable to configure the Server to use
   ISM memory rather than running either EBF. If the non-EBF Server will