Thursday, October 29, 2009

Are you ready? Yes, I am.

Alexey posted interesting info about a sudden computer failure. Since I have been working with computers since 1987, I want to share my experience too.

Three years ago my computer wouldn't turn on. The possible causes were the processor, the motherboard or the power supply unit. After some thinking I chose to buy a new power supply unit, and I was right: the motherboard and processor were alive. I was lucky not to buy a new motherboard, right?
During the last 3 months I didn't have any problems with my own computer, but
- the video card in my wife's computer died
- the motherboard in my daughter's computer died

In the first case it took 2 days to diagnose what happened. The second took 1 day. Anyway, my daughter's computer still doesn't work, because I haven't bought a new motherboard and processor yet.

Alexey's case also shows that it is very important to have spare hardware nearby to swap in. Your computer (or server) may stop working because one of these parts has died:
  1. processor
  2. motherboard
  3. videocard
  4. power supply unit
  5. hard drive
While the first 4 parts can be replaced without affecting your system, the last one, the hard drive, is the core of your system. If you don't have a backup, you will not be able to return the server to its working state. Yes, you can spend some time installing the OS on a new hard drive, but have you ever thought about how much time that will take?

Alexey and I are speaking about desktops. Alexey has a lot of computers at home, plus a netbook, so the failure isn't hitting him that hard. My daughter is also "safe", because all the music and photos on her hard drive are still undamaged.
But really, what if a server stops working? How much time will it take to restore it? Think about software the same way you think about hardware: a database failure can cause the same damage as a broken hard drive.
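As a minimal illustration (the database and file names here are just placeholders), a regular gbak backup to a different physical disk, plus an occasional test restore, is what turns a dead hard drive into an inconvenience instead of a disaster:

rem back up the database to a file on another physical disk
gbak -b -g -user SYSDBA -password masterkey srv:D:\data\mydb.fdb E:\backup\mydb.fbk

rem from time to time, check that the backup really restores
gbak -c -user SYSDBA -password masterkey E:\backup\mydb.fbk E:\restore_test\mydb_check.fdb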

p.s. Right now we are in contact with a customer who has a broken hard drive. Instead of 3 databases, the file recovery service produced 9 database files. Now the customer needs to figure out which databases are the latest ones and which ones we need to repair. Also, the databases are heavily damaged, so only copies of those databases may help to restore the data.

Saturday, September 26, 2009

Tips'n'Tricks using FBScanner

Yes, sometimes I use FBScanner too. :-)
My setup is complex, because I have a huge number of Firebird, InterBase and Yaffil versions installed. While Yaffil does not interfere with Firebird and InterBase, I periodically need to run Firebird 1.0, 1.5, 2.0, 2.1, 2.5 and InterBase 6.x, 7.0, 7.1, 7.5, 2007 and 2009. I do this by removing the service records with "instsvc remove" after installation, because I don't need FB or IB as a service and run them only as applications, like

fbserver -a
or
ibserver -a

To simplify this task I've created several cmd files that look like this:
fb2.cmd:
call remove_all.cmd
d:\firebird2\bin\instreg install
d:\firebird2\bin\fbserver -a

and remove_all.cmd is:
d:\ib71\bin\instreg remove
d:\ib75\bin\instreg remove
d:\ib2007\bin\instreg remove gds_db
d:\ib2009\bin\instreg remove gds_db
d:\ya\bin\instreg remove
d:\intrbase\bin\instreg remove
d:\firebird\bin\instreg remove
d:\firebird2\bin\instreg remove
d:\firebird25\bin\instreg remove
...

So, if I need to run Firebird 1.5, I simply call fb15.cmd and in less than a second I have Firebird 1.5 running. If I then need to run InterBase 2007, I just stop the Firebird 1.5 application (shut it down) and run ib8.cmd.
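fb15.cmd is built the same way; assuming Firebird 1.5 is installed in d:\firebird15 (the path is just an example, adjust it to your layout), it would be:

call remove_all.cmd
d:\firebird15\bin\instreg install
d:\firebird15\bin\fbserver -a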

Well, back to FBScanner. By default it tries to find an installed Firebird or InterBase service and to adjust its configuration so that the server works on a port other than 3050. Unfortunately for FBScanner, the only service I have installed is InterBase 4.1. Anyway, I leave the FBScanner configuration as is: it intercepts port 3050 and redirects it to port 3052.
Then I edit firebird.conf, for example for Firebird 2.1, uncommenting and changing the RemoteServicePort parameter:

RemoteServicePort = 3052

So, when I start fb2.cmd, my Firebird 2.1 runs and listens on port 3052, not on 3050.
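If you want to double-check which port the server is actually listening on, a quick netstat is enough (3052 should belong to fbserver started this way, and 3050 to FBScanner):

netstat -an | find ":3050"
netstat -an | find ":3052"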

So, if I connect from any application to Firebird through port 3050, FBScanner will intercept the traffic and log everything that happens between the Firebird server and the client.

But sometimes I don't want to intercept or watch certain connections, or I want to watch connections only to specific databases. That's simple!
You need to know that if fbclient.dll finds a firebird.conf one directory level above itself, it will use the port number specified there.

For example, if I connect to some database with IBExpert, specifying the client library as ...firebird2\bin\fbclient.dll, it will use port 3052 from that firebird.conf and the traffic will not be intercepted by FBScanner.
Instead, if I want the traffic to be intercepted by FBScanner, I need to write the server name not as localhost, as usual, but as localhost/3050. This time the traffic will go through FBScanner, and every statement and transaction will be monitored.
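In isql terms the difference looks like this (the database path is just an example, and I assume isql is run from firebird2\bin so that it picks up RemoteServicePort from firebird.conf as described above):

rem connects directly on port 3052 from firebird.conf, bypassing FBScanner
isql localhost:C:\data\test.fdb -user SYSDBA -password masterkey

rem connects to port 3050, i.e. through FBScanner, which forwards it to 3052
isql localhost/3050:C:\data\test.fdb -user SYSDBA -password masterkey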

I hope this example will help you configure Firebird and FBScanner when you want to check what your application is doing with the server.

Friday, September 25, 2009

Nostalgia

Remember our IBDeveloper Magazine? It was (and still is) at the website www.ibdeveloper.com, but some time ago the site was hacked, so your browser may tell you that you should not open the link.

Anyway, we started placing interesting presentations about Firebird and InterBase on Scribd, and have now decided to put our IBDeveloper Magazine there as well, all 4 issues. We also found the lovely old InterCom magazine issues (one of us is a bit thrifty, or provident, if you wish) and placed them there too.

If you have spent years with InterBase, don't be shy about dropping a tear on the keyboard while re-reading InterCom issues from the past century.

Thursday, September 17, 2009

64 bit Delphi. Who needs it?

I watch not only the InterBase and Firebird newsgroups and forums, but the Delphi ones too.
And I know that at least a lot of Russian Delphi programmers are complaining about the still missing support for 64-bit Windows in Delphi.

Today at DelphiFeeds.com I saw the post "64 bit tommorow – Wh/if you’ll have more than 4GB “today”?", and I want to share my opinion on it. I also encourage you to vote on that post, as I did.
That post has a lot of technical replies, but I want to look at the "business" side of supporting 64-bit Windows.

Yes, 64-bit operating systems are in use now, but mostly on servers. Delphi programmers, however, mostly write programs not for servers, but for regular customers working on desktop computers.

Let's look at the very good Steam hardware survey:
http://store.steampowered.com/hwsurvey/

Right now ~18% of gaming computers use a 64-bit OS. But gamers are not enterprise customers. 32-bit programs still work well on 64-bit operating systems, while 64-bit programs can't run on a 32-bit OS.

Moreover, I'm sure that most Delphi developers who want 64-bit support really just want to keep using what they already have, without going into the details of how it can be done. Maybe I will look a bit rude to someone, but I think the Joker quote applies here:
"You know what I am? I'm a dog chasing cars. I wouldn't know what to do with one if I caught it."

Of course, some Delphi developers really do need 64-bit Delphi. But for what tasks?
  • middleware application servers
  • scientific software
  • compatibility/dll software
And that's it. 95% of software written in Delphi, or even more, is designed for end users, who don't care whether it is 32 or 64 bits, and given the nature of these applications, 64-bit support will give them nothing. Currently, the more interesting story is multi-core processors. What stock or accounting software can utilize more than 1 processor core? And what for? And the main question: do you know how hard it is to upgrade the operating system across an enterprise, where a lot of compatibility issues have to be taken into account?

Interestingly, using GPUs for computation has given scientific applications far more capability and performance than 64-bit systems have. Moving an application from 32 to 64 bits gives you, if anything, no more than a 20-30% performance increase, and only if the application is optimized for it, while using a GPU can speed up computations by up to 100 times.

But don't consider me a diehard; I'm just a bit of a sceptic and try to look at things realistically. I believe in multi-core (games can utilize up to 3 cores now!), and I believe in 64-bit.

Monday, August 24, 2009

Firebird - 1 terabyte database

We ran a 1 terabyte database test with Firebird 2.1. Read more. Questions?

Friday, June 26, 2009

local protocol and multi-core processors

We found strange behavior of the local protocol connection with Firebird SuperServer 1.5 on Windows. The tests were made on a computer with a 2-core AMD processor, using a command-line backup like
gbak -b -g db.fdb db.fbk
When gbak is not bound to a particular core (i.e. it can use all of them), or is bound to a different core than the one the Firebird SuperServer runs on (for example, fbserver.exe on core 0 and gbak.exe on core 1), gbak barely loads its core, and fbserver.exe loads its core only at 50%.
When we bind gbak.exe to the same core that fbserver.exe uses, the backup speed nearly doubles, and fbserver.exe loads the core at 95%. Example results for gbak -b -g of a 3.8 GB database:
  • fbserver and gbak on the same core - 9 min 22 sec.
  • fbserver and gbak on different cores - 15 min 41 sec.
This is the opposite of the TCP protocol, where it doesn't matter which core gbak.exe runs on, and the backup takes 4 min 10 sec.

So, right now we suggest not using the local protocol with Firebird 1.5; use localhost instead. More tests are on the way, stay tuned.
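If you have to stay on the local protocol for some reason, the workaround is to pin both processes to the same core. A rough sketch, not exactly what we typed (a mask of 1 means the first core):

# firebird.conf: keep SuperServer on core 0
CpuAffinityMask = 1

and then start gbak on the same core. The start /affinity switch exists in Windows Vista / Server 2008 and later; on older Windows you can set the affinity in Task Manager instead:

start /affinity 1 gbak -b -g db.fdb db.fbk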

Wednesday, June 17, 2009

What is sort 2

Someone may draw the wrong conclusion from my previous post about sorting, namely that "sorting ... mostly does writes". To be clear, I was speaking only about the temporary sort files and the sorting process itself. The whole picture of a query with PLAN SORT is the following:
  • the server (Firebird, InterBase) reads a portion of the query's data, sorts this block and writes it to the temporary file. So here we have reads from the database and writes to the temporary file.
  • after all the data has been read from the query (database) and sorted, the server begins to send the sorted data from the temporary file to the user. Of course, this happens only if the client application reads the resulting data, i.e. calls "fetch". Here we have reads from the temporary file.
In this case the number of writes to and reads from the temporary file is the same. But the number of database reads depends on the query itself and on the number of records being processed. I will speak about it later, because right now my computer is busy with another interesting test.

Thursday, June 04, 2009

What is sort?

Inspired by a discussion about sorting (PLAN SORT), I did some simple tests. Right now I do not have "an article" about this, but I want to show you some of the facts I discovered:
  • sorting mostly does writes to the temporary file (fb_sort_nnnn.tmp and ib_sort_nnnn.tmp), not reads (excluding database reads and fetching data from the temp file). The Firebird 2.1 read/write ratio is 1:10. The InterBase 2009 read/write ratio is different, nearly 2:3.
  • turning on Windows file/folder compression for TEMP lowers the sort temp file size by 2-4 times (it depends on the data being sorted; I used repeating data, sorry)
  • turning on Windows file/folder compression for TEMP doubles the processor load and makes the disk transfer rate ~4 times lower.
  • Firebird 2.1 shows only a small (negligible) difference when sorting on compressed and on uncompressed TEMP (4 min 00 sec). Compressed TEMP produces more stable timings when the test is run several times.
  • InterBase 2009 sorting speed is equal to Firebird 2.1 only on uncompressed TEMP, and it is slower on compressed TEMP (4 min 00 sec vs 5 min 00 sec). The cause is InterBase's higher read/write ratio (blocks being read have to be decompressed).
  • on my computer, sorting 31 million records (select of a varchar(20) column from ...) produces a temp file 4.28 GB in size (uncompressed)
  • InterBase 2009 uses bigger sort blocks (chunks) than Firebird 2.1. By "sort block" I mean the set of records that is sorted in memory and then written to the sort file for a later merge with other blocks
  • on uncompressed TEMP, InterBase 2009 loads the processor less than Firebird 2.1 (35% vs 40%), but writes to disk faster (30 MB/sec vs 25 MB/sec)
This was a "single-user" test. Running concurrent sorting queries may produce different results. What can be said for sure right now is that if you have a lot of queries with PLAN SORT, you must (!) have TEMP pointing to a separate physical drive. And maybe RAID 0 will help.
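For Firebird the simplest way to point the sort files somewhere else is the TempDirectories parameter in firebird.conf (the path below is just an example; InterBase has a similar TMP_DIRECTORY setting in ibconfig):

# firebird.conf: put sort files on a separate physical disk
TempDirectories = D:\fbtemp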
So, questions? :-)

Wednesday, March 25, 2009

InterBase 2009 lost ODS 10.1 support

For years InterBase has used the Y-valve, implemented by Jim Starkey, to support databases from previous versions in newer InterBase releases.
Each server version has exactly one native ODS (On-Disk Structure): the ODS of a database created with that server. For example, the native ODS for InterBase 6.0 is 10.0; for Firebird 1.5 it is 10.1, and so on.
And, as a feature, InterBase and Firebird support at least the N-1 ODS version.
The decision about which ODS support to remove from the server rests entirely with the server developers. Firebird 2.5 can still open databases from InterBase 5.6 only because the Firebird developers still keep the code that supports them.
But it seems that the InterBase developers decided to eliminate ODS 10.1 support from InterBase 2009. InterBase 2007 can open ODS 10.1 databases without a problem. But with InterBase 2009:
  • the initial release, 9.0.0.206, says "incompatible version of on-disk structure"
  • IB 2009 Update 2 (9.0.2.369) simply crashes
Is that bad or not? I don't know. Right now InterBase 2009 uses ODS 13 for new databases. The ODS gap is a bit wider (13 - 10 = 3) than in Firebird (11 - 10 = 1).

But you need to know that you can no longer open old databases (with an ODS lower than 11) with InterBase 2009.
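By the way, if you are not sure which ODS an old database file has, gstat will tell you before you risk opening it with a new server (the path is just an example; look for the "ODS version" line in the header page output):

gstat -h C:\olddata\old.gdb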

p.s. I have not found any databases with ODS 10.0 on my computer, but ODS 10.0 and 10.1 differ only in some additional indices on certain system tables in ODS 10.1.

Tuesday, February 10, 2009

Broken Indices

Since version 2.0, IBAnalyst can report broken or inconsistent indices. It detects these cases by comparing the index key count with the record count: if the key count is less than records + versions, the index is broken.
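(These are, roughly, the same numbers you can see in raw gstat output yourself: "total records" and "total versions" for the table, and "nodes" for each of its indices. The database path below is just an example.)

gstat -r -user SYSDBA -password masterkey C:\data\mydb.fdb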

How can this happen?
First, of course, it can happen when the database is corrupted and some keys are missing. But the more realistic cause of this inconsistency is building an index while the data is being modified.
It can be easily reproduced, because index creation or re-activation goes through three steps:
  1. the server copies data from the table to a temp file
  2. the data in the temp file is sorted
  3. the server writes the sorted data into the database as an index
If there is no write lock on the table during all three steps, data modified (or inserted) in the meantime will not be present in the created index, and will never be found by an index search.
So, find a big table, run "create index" on it, wait until the sort temp file has been completely created in the temp directory, and then, from another connection, insert a record into the table and commit. Then try to find that record with a where condition on the indexed column. Nothing. Scary?
Yes, but this is fixed in Firebird 2.0 (it takes a write lock on the table during indexing).
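Just to illustrate the race (the table and column names are made up, and this applies only to servers without that write lock, i.e. before Firebird 2.0):

-- connection 1: start building an index on a big table
create index idx_big_amount on big_table (amount);

-- connection 2: while connection 1 is still writing its sort temp file,
-- insert a row and commit
insert into big_table (id, amount) values (1000001, 500);
commit;

-- after the index build commits, an indexed search like this
-- will not find the freshly inserted record
select * from big_table where amount = 500;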

p.s. gfix can also detect this type of index inconsistency.