This section gives a quick overview of a typical onShore TimeSheet work and data flow. There is no set way in which onShore TimeSheet must be used; this is just an example to convey the big picture:
onShore TimeSheet Work and Data Flow
Client X approaches the company for a project or other work.
Client X is entered into an accounting system or database external to onShore TimeSheet, which assigns the client a unique Client Id number.
Client X is entered into onShore TimeSheet by a supervisor with administration privileges.
Job(s) are created (and opened) for Client X.
Users in the onShore TimeSheet system can be notified of the job's creation.
Optional cron script runs to export the job's information for import into an external database system separate from onShore TimeSheet.
User(s) log hours to the job(s).
Cron script runs to inform supervisors that jobs have hours which require their approval.
Optionally, the supervisor can search for unapproved hours at any time through the View/Edit Hours screen.
Supervisor approves (and edits if necessary) hours.
Cron script, if installed or enabled by the onShore TimeSheet administrator, exports all approved hours for import into an external database system separate from onShore TimeSheet. That way approved hours can be integrated directly with an invoicing system in order to bill the client.
Supervisor closes a job after it is completed.
There are various cron scripts which the onShore TimeSheet administrator can choose to run, depending on your business's needs. For example, at onShore, Inc. we have a separate business database for contacts, invoicing, and project tracking, so this distribution includes scripts for exporting hours entries and jobs from PostgreSQL to a flat text file suitable for importing into that other database. In addition, there are scripts for backing up the database, which is described in a separate section, and for advising supervisors that they have unapproved hours. This section briefly describes the various scripts and suggests crontab entries for enabling automated operation. In a default installation these scripts can be found in /usr/sbin.
As previously mentioned, no installation creates crontab entries that will run without the administrator explicitly enabling them. If you are not installing on a Debian Linux system, the onShore TimeSheet administrator will need to create the crontab entries from scratch. Below each script's description you will find a suggested crontab entry for enabling it as an automated operation, and a combined example crontab is shown after the script descriptions.
timesheet-daily-report:
Reports to supervisors through e-mail. The supervisor daily report includes a summary of unapproved hours, and a summary of open jobs managed by the supervisor.
This is an example crontab entry, which will send off the reports at 1:00am every night:
0 1 * * * /usr/sbin/timesheet-daily-report
timesheet-export-hours:
Exports the hours which have been marked approved in the database, and marks those hours as "downloaded" (and therefore no longer editable, so the records cannot get out of sync with the other database).
This crontab entry will export hours data every weekday night at 2:00am:
0 2 * * 1-5 /usr/sbin/timesheet-export-hours
timesheet-export-jobs:
Exports any jobs which have been created since the last time the script was run, and marks them as "downloaded" (and thus not needing to be downloaded again), for importing into another database. Jobs remain editable after being marked as "downloaded".
This crontab entry will export the new jobs data every weekday night at 1:30am:
30 1 * * 1-5 /usr/sbin/timesheet-export-jobs
timesheet-dump:
Exports tables in the database using PostgreSQL's pg_dump (1) command for backup purposes. The text files produced contain the necessary SQL commands for re-creating the database from scratch and importing the information in the tables.
This crontab entry will dump the full database for backup purposes at midnight every Friday:
0 0 * * 5 /usr/sbin/timesheet-dump
timesheet-load:
Imports the tables dumped from timesheet-dump into the PostgreSQL database for restoration purposes.
There is no example crontab entry, as this shouldn't be an automated operation.
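Putting the suggested entries together, a crontab enabling all of the automated operations described above might look like the following. This is only a sketch; adjust the paths and times to suit your installation.
0 1 * * * /usr/sbin/timesheet-daily-report
0 2 * * 1-5 /usr/sbin/timesheet-export-hours
30 1 * * 1-5 /usr/sbin/timesheet-export-jobs
0 0 * * 5 /usr/sbin/timesheet-dump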
onShore TimeSheet access logs are the same as the HTTP server's logs; no separate access logging is used.
The application logging directory is defined in the Makefile with the LOGDIR variable.
The export programs, timesheet-export-hours and timesheet-export-jobs, will use the EXPORT_LOG variable in the Makefile.
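As an illustration, the relevant Makefile settings might look something like this (the values shown are placeholders, not the distribution's defaults; check your own Makefile for the actual paths):
LOGDIR     = /var/log/timesheet
EXPORT_LOG = $(LOGDIR)/export.log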
The format of the export log file is simple: each entry is the script's main action (either exporthours or exportjobs), followed by a colon, the date, another colon, and a space-separated list of ids (either hour_ids or job_ids). An example follows:
exporthours:03-16-1999:11955 11990 12025 12026
exportjobs:03-16-1999:13064 13065
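If you ever need to pull the ids back out of the log, a simple shell pipeline is enough. For example, assuming the export log lives at /var/log/timesheet/export.log (an illustrative path only), this lists the hour_ids exported on a given date:
grep '^exporthours:03-16-1999:' /var/log/timesheet/export.log | cut -d: -f3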
By default, onShore TimeSheet is not backed up; it is up to the onShore TimeSheet administrator to set backups up. However, it is highly encouraged that a backup be made at least once a week if your company's work flow and billing depend heavily on onShore TimeSheet. This is not because onShore TimeSheet is inherently unstable, but because accidents happen: accidental deletions, database corruption, disk corruption, and the like may necessitate a partial or full database restore.
Tape backups of the filesystem where the database keeps its data files will protect you from losing the database completely, but we highly suggest using the timesheet-dump script included in the onShore TimeSheet distribution for backup operations. This script uses pg_dump (1) to dump the database into a script file containing the SQL commands necessary for recreating it. Storing the database in this ASCII format allows you to easily restore it using timesheet-load, which in turn runs the dumped script with psql (1), re-creating or replacing any lost data. You can also use the dumps from timesheet-dump to import or create a duplicate database on another server.
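Roughly speaking, the dump and restore steps are equivalent to running pg_dump (1) and psql (1) by hand. For example, assuming the database is named timesheet (a placeholder; use whatever name your installation was configured with):
pg_dump timesheet > timesheet-backup.sql
psql timesheet < timesheet-backup.sql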
If queries start to fail, such as during the automated exporting of hours and jobs, it is possible that you need to tune PostgreSQL a bit. Depending on how PostgreSQL is compiled and configured, and how large the database grows, changing some of the values passed to postmaster may increase performance and prevent the backend from dying on large queries.
More specifically, postmaster can be passed two values which impact its performance. One is the number of shared-memory buffers postmaster allocates for backend server processes. The value is in 8kB blocks and is passed to postmaster with the -B flag when starting the server. As an example, we have a database with about 12,000 hours records (and about a third as many jobs records). On certain multi-select statements postmaster will die with the default number of shared-memory buffers, so we have it allocating 128 8kB blocks on a machine with about 200 MB of RAM.
The second value controls how much memory each backend process may use for sorting before resorting to disk. In the same example above we allow 1024 kilobytes per process before sorts spill to disk. This value is passed along to postmaster with the -S flag when starting the server.
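As a sketch, the postmaster command line for the example above might look like the following. The data directory path is a placeholder, and depending on your PostgreSQL version the sort-memory value may need to be handed to the backends through postmaster's -o option rather than given directly:
postmaster -B 128 -o "-S 1024" -D /usr/local/pgsql/data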
If something in this document doesn't help you and you need more support, there are several options. First, there is an onShore TimeSheet mailing list you can subscribe and post to; its address is <timesheet@onShore.com>. You can subscribe through the web, which also offers a searchable archive of previous posts to the list. Alternatively, to join the list send an email to <timesheet-request@onShore.com> with a body that says SUBSCRIBE followed by an email address.
Second, the home page for onShore TimeSheet, http://www.onshore-timesheet.org, will offer all of the onShore TimeSheet documentation online, as well as links to the development and consulting services we offer for supporting onShore TimeSheet. This site will also be the main distribution point for new releases of the onShore TimeSheet source as they become available.