Question about performance

Steve McInerney subs-nsd at stedee.id.au
Tue Jun 13 10:55:20 UTC 2006


You don't mention the operating system, but if it's Solaris you can
effectively move the files into a memory filesystem by doing any/all of
the file I/O work in /tmp, or by creating a new swapfs (tmpfs)
filesystem to do the same.

Under Linux, you can do the same with /dev/shm.


In either case this will help hugely if you are disk-I/O bound and have
spare memory to play with.

Just don't forget to copy the files back out. :-)
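
Roughly, the workflow looks like this on Linux. This is only a sketch;
the paths are made up, so point it at wherever your nsd.zones, zone
files and database actually live:

    # hypothetical paths -- adjust to your layout
    mkdir /dev/shm/nsd-build
    cp /etc/nsd/nsd.zones /dev/shm/nsd-build/
    cp -r /etc/nsd/zones /dev/shm/nsd-build/

    # run zonec against the in-memory copies so its reads and writes
    # hit RAM instead of disk
    # (cd /dev/shm/nsd-build && zonec ...)

    # copy the compiled database back to disk when done; nothing in
    # /dev/shm survives a reboot
    cp /dev/shm/nsd-build/nsd.database /path/to/nsd.database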

We've done something similar: symlinking Oracle database files from disk
into their own swapfs as part of an initial system load. That shrank an
8-hour load down to ~6 minutes, on 4-way Sun V440s with 16 GB of RAM
=> 10 GB swapfs. YMMV.

BTW, if Solaris, grab and install a copy of the SE Toolkit if you're on
anything before Solaris 10; it's great for quickly discovering where the
bottlenecks are. Use DTrace if you're on 10.

HTH?


- Steve

on 13/06/2006 3:40 AM Irenäus Becker said the following:
> Hello,
> 
> we use nsd version 2.3.3 with currently 53000 active zones. In the nsd.zones file we also have 9000 commented-out zones, so our nsd.zones holds 62000 entries. Its size is 3.5 MB. All active zones together have 400000 records.
> The application "zonec" needs 5-6 minutes to rebuild nsd.database (current size: 15 MB) on a Sun Fire 280 UltraSPARC III 900 MHz with 1 GB RAM.
> The load of the machine stays at 0.1-0.2 while zonec rebuilds nsd.database.
> 
> My questions are:
> - Does it take you this long to rebuild nsd.database as well?
> - Why does the load stay at such a low level?
> - Is it possible to accelerate the rebuild process?
> 
> Thank you very much,
> Irenäus Becker
> 



