online_update --url 'my_local_update_server' --force -S patch-10903
or you can do what I do and write a script that checks the architecture and runs rpm -ivh http://installserver/sample-i386.rpm from a web server. The script is usable on RH or SLES for onesy-twosey patches.
Friday, December 29, 2006
SLES still sucks
but this makes it suck less:
automatic update at your command-
It still doesn't do the right thing with a kernel (it upgrades instead of installs, leaving modules broken and your currently running machine in bad need of a reboot), so it is dangerous in some ways.
Just FYI here is the magic:
ssh -n $HOSTNAME "which online_update && online_update -gVu http://servername/YOU/ && online_update -iV"
This executes the command "which online_update" and, if that is successful, runs online_update to download packages from your YOU server (yast2 can help you make one that works) to the local box; then, if that is successful, it runs online_update against the packages on the local box. No other combination of switches appears to work to update a machine via online_update: SLES needs to download, then install.
RHEL/CentOS does the right thing with the kernel and only requires a "which yum && yum -y update", and you can run your own repositories if you use yum, like I do, so it is still better (if you use up2date, that is okay too, but yum does better with repositories).
The reason I run "which commandname" is to avoid trying to yum a SLES box and online_update a CentOS.
I can feed the script file a list of servers and it will go patch the lot. You can save the output and have a list of patched boxes.
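A minimal sketch of that driver script, assuming a plain-text servers.txt with one hostname per line (the YOU URL is a placeholder for your own server):

#!/bin/sh
# patch-all.sh: run whichever updater the host actually has
while read BOX; do
  echo "=== $BOX ==="
  ssh -n "$BOX" 'which online_update && online_update -gVu http://servername/YOU/ && online_update -iV'
  ssh -n "$BOX" 'which yum && yum -y update'
done < servers.txt | tee patched-boxes.log

The tee at the end gives you that list of patched boxes.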
Wednesday, November 29, 2006
Serial USB on the Mac is a beating
and not a very pleasant one.
http://www.macosxhints.com/article.php?story=20060105104506687&lsrc=osxh
Is the way I finally got it to work again.
I've used the GUC-232A with good success under Linux: you don't do anything, just configure minicom and go. And adequate success under Windows: download the driver and go.
But none of the OS X USB serial drivers quite work with the Prolific chipset that runs the GUC-232A and the UC-232A I have. So the directions below are what I had to do on an Intel iMac and a G4 12" PowerBook.
Here is the reprint of the material linked just in case:
Download and Install Drivers
1. Go to Prolific's download page and download the latest Mac OS X drivers.
2. Open the Zip File
3. Mount the Disk Image
4. Open the Installer Package and install the drivers
5. Reboot
Change Kernel Extension Property List
1. Plug the GUC232A into any available USB port on your Mac
2. Open the System Profiler, in /Applications -> Utilities
3. Click USB in the Contents pane
4. Select the GUC232A in the Device Tree; usually it will be listed under USB-Serial Controller
5. Remember the ProductID and VendorID, or keep the System Profiler window open
6. Open the Terminal, in /Applications -> Utilities
7. Use the following command to open the Property List of the Prolific driver:
sudo nano /System/Library/Extensions/ProlificUsbSerial.kext/Contents/Info.plist
8. Enter your admin password when asked. This is necessary; the ProlificUsbSerial kernel extension is owned by root.
9. Scroll down and find the ProductID and VendorID in the plist file
10. Change the ProductID and VendorID to match your GUC232A's ProductID and VendorID
11. The plist file needs the numbers as integer values, but System Profiler reports the numbers as hex. Use the Calculator to convert the numbers. For example, System Profiler reports the Product ID as 0x2008 and the Vendor ID as 0x0557. The integer value of ProductID is 8200 and the integer value of VendorID is 1367
12. Save the changes (Control-O to write out) and quit (Control-X) nano
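If you'd rather skip the Calculator, printf in the Terminal does the same hex-to-decimal conversion (using the example IDs above):

printf "%d\n" 0x2008   # prints 8200
printf "%d\n" 0x0557   # prints 1367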
Reload Kernel Extension
1. Unplug the GUC232A
2. Use the following command to load the kernel extension:
sudo kextload /System/Library/Extensions/ProlificUsbSerial.kext
3. Plug the GUC232A into any available USB port on your Mac
Soft Skills
If you are in business and IT (and you might be if you read this), you need to examine critically the following if you haven't already:
Frederick Brooks: The Mythical Man-Month
DeMarco and Lister: Peopleware
W. Edwards Deming: Out of the Crisis
Limoncelli and Hogan: The Practice of System and Network Administration
I'm not saying they are all correct or a roadmap to instant success, but each has information and experience that needs to be considered and examined.
Friday, August 25, 2006
tar and ssh like peanut butter and jelly
Everybody knows the classic command to make a tar file:
tar -cvf file.tar directory_to_tar
and if you use GNU tar you can compress in that step:
tar -zcvf file.tgz directory_to_tar
or
tar -jcvf file.tar.bz2 directory_to_tar
for gzip and bzip2 respectively...
but did you know:
tar zcvf - /directory_to_tar | ssh hostname "cat > file.tgz"
so you can tar gzip on one end and write the file on the other end of an ssh session?
Huh? Did you?
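It works the other direction too; the same trick with the create on the far end pulls a remote directory down:

ssh hostname "tar zcvf - /directory_to_tar" | tar zxvf -

or pipe into "cat > file.tgz" locally if you just want the tarball.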
Yes I know I still owe an automounter deal... I promise it is on the way.
Too busy gardening the Silicon Rust...
Wednesday, August 02, 2006
automounter is awesome
Ever had an nfs client hang because the server rebooted or hiccupped?
Ever had a web of nfs mounts that won't come up cleanly because there isn't a good order? Like Server A mounts Server B that mounts Server C that mounts Server A (not a good practice, but all too common in crufty environments).
Well automounter can solve some of those problems.
I'll post the technical details tomorrow or the next day.
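In the meantime, here is a minimal sketch of what an autofs setup looks like (the map name, path, and server are placeholders):

# /etc/auto.master
/data /etc/auto.data --timeout=60

# /etc/auto.data
projects -rw,soft,intr servera:/export/projects

With that, /data/projects mounts on first access and unmounts itself after 60 idle seconds, so a flaky server only hurts while you are actually touching it.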
Tuesday, August 01, 2006
MySQL stupidity
If you want to be able to troubleshoot mysql:
1) mysqlreport is very cool: http://hackmysql.com. The documentation is awesome and for a quick rush it is fabulous. After you get it installed, try having it mail you a tab-delimited report: mysqlreport --email user@domain.com --pass --all -tab
2) mytop is also very cool.
3) making sure your indexes fit in RAM is very cool: ls the *.MYI files and sum their sizes (see the one-liner after this list). That total should be less than physical RAM (and less than the key_buffer_size variable from my.cnf... you did already check my.cnf, right?)
4) you can make some changes on the fly: check your variables with SHOW VARIABLES, then set them with the mysql client
5) make sure your mysql install is logging :roll eyes:
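For item 3, a quick way to total the index files (assuming the default datadir of /var/lib/mysql):

du -ch /var/lib/mysql/*/*.MYI | tail -1

and for item 4, checking and raising a variable on the fly looks like this (the size is just an example):

mysql -e "SHOW VARIABLES LIKE 'key_buffer_size'"
mysql -e "SET GLOBAL key_buffer_size = 268435456"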
Tuesday, June 27, 2006
getting rid of standard error
scp foo server: 2>/dev/null
the descriptor 2 is for standard error.
Redirecting 2 to /dev/null makes the standard error go away.
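If you want the command completely quiet, send standard output along for the ride:

scp foo server: >/dev/null 2>&1

The 2>&1 points descriptor 2 at wherever descriptor 1 is already going, which here is /dev/null.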
Friday, June 16, 2006
LVM and a rant on bonding and 802.1q
Use LVM if you are using a modern Linux.
Really. It will make your life easier.
Bonding and 802.1q configuration under Linux suck right now. If I get time to experiment, I will figure out the model config. But really, Red Hat or SUSE needs to come out with a configuration tool so you can bond interfaces (with static or DHCP addresses) and use 802.1q VLAN tagging on those interfaces (or on non-bonded interfaces).
Monday, May 29, 2006
Cisco 2500 router IOS upgrade
Bucket of pain. The 2500 series routers can have 16MB of RAM and 16MB of flash. The OS, called IOS, is stored in flash; the config goes in NVRAM and the boot code goes in the boot ROM.
I have two routers with two banks of 8MB flash, and it was a mother to upgrade one of them.
The first router upgraded fine with the classic copy tftp: flash: syntax. It erased the old IOS, put the new one on, and away it went (I only had to fiddle with the config register once, setting 0x2142 to get rid of a config with a password I forgot).
The second one was pain. The two flash banks showed up separate, and copy tftp: flash: spat back READ ONLY FILE SYSTEM... so on and so forth.
Here was the fix:
config-register 0x2101 (this boots the ROM's cut-down IOS so flash isn't in use)
partition 1 16 (make one big 16MB partition instead of two 8s)
copy tftp flash
config-register 0x2102 (back to booting the full IOS from flash)
The 2500 is great for a lab, but don't use one in production. The new ISR routers are quite nice.
Tuesday, May 16, 2006
Use DNS- it's good enough for the internet
I just fixed a couple of boxes that didn't know what localhost was... actually they did, but it was wrong (pointing to their actual IP address, not 127.0.0.1).
Look, don't mess with hosts files. You don't need to. Use DNS.
If you have more than one host, use DNS with dynamic updates from DHCP. You can reserve IP addresses so hosts always get the same one, you can extend lease times, you can put all kinds of things in DNS. But if you update dynamically, you will always have forward and reverse DNS correct (A record and PTR) and you won't have stupid hosts-file troubles like I just had.
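A quick sanity check that forward and reverse agree (name and address made up):

host www.example.com
host 192.0.2.10

The A record from the first lookup should be the address you feed the second, and the PTR should hand back the same name.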
Friday, May 12, 2006
Cisco 3005 VPN concentrator resurrection
I found an unused 3005 going through one site's material. Since the current VPN terminates on PCs, I thought I'd get the 3005 going.
While I had the 3005 on the shelf in the lab, I found a problem. The 3005 has a public and a private interface. The private interface would intermittently drop its physical connection. I inspected the network jack and it looked good, no bent pins. But every time I'd wiggle the network cable (or even move the middle of the cable), the connection would drop.
So I tore the out-of-warranty, out-of-service 3005 apart (don't do this, it will void your warranty). I checked the posts and solder on the network jack. It looked good, so I put the 3005 back together. While I had it apart, I noticed two little silver tabs on the sides inside of the jack where the pins are. I used a very fine screwdriver and bent these two tabs out on both jacks, hoping it would tighten the grip on the network cable.
Sure enough it works. I'll try to get a macro picture up soon.
Tuesday, May 09, 2006
Silly Juniper...
Just got the Juniper ScreenOS Product Documentation CD Version 5.0 June 2004 Rev. B in some brand new NetScreen 50 boxes.
Either the doc CDs aren't revisioned very often or these NS50s move kinda slow... anyway,
The disc is CDFS or whatever, but all the directories are mode 444 on Linux and Mac OS. Without the execute bit, you can't change into the directories to read the PDFs as anyone but root. On a Linux box, at least you can be root; on a Mac it is really inconvenient. In either case, why would I want to be root just to read PDFs?
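The workaround sketch I'd suggest: copy the disc contents somewhere local and put the execute bit back on the directories (the mount point is whatever your system uses):

cp -R /media/cdrom ~/juniper-docs
chmod -R u+rX ~/juniper-docs

The capital X sets execute on directories (and on already-executable files) without marking every PDF executable.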
Looks like nobody at Juniper uses the doc CD on different architectures. Maybe they only use Windows internally, or have all the docs on an internal webserver.
Monday, May 08, 2006
mount loop: cd or dvd iso
if you need to use a DVD or CD ISO on your Linux box, just mount it loopback. You can even export the mount or serve it out via http.
mount -o loop /home/fedora/FC3-i386-DVD.iso /home/fedora/pub/mirrors
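To share that over NFS, for example, one line in /etc/exports does it (options to taste):

/home/fedora/pub/mirrors *(ro,sync)

then run exportfs -ra to reread the exports file.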
and that makes gardening a little easier...
Sunday, April 23, 2006
What do you do to emulate a server?
So I get one of those typical gardening complaints: "I can't communicate with my server that is running on port 3000... what is wrong?"
Well, first I run "netstat -an | grep 3000", looking for a daemon listening on 3000. Nothing. No server, no communication. But I don't know the server software well enough to start it up (custom, no init.d script). So I whip out netcat (could be nc on some platforms) and start it listening as a daemon:
netcat -l -p 3000
netcat listens on port 3000, and anything it receives is written to stdout. So now, from a host that should be able to connect to the server on 3000: "telnet servername 3000". Connected, and things typed appear on netcat's terminal. Looks good.
Looks like the problem is the server software. Back to your application, developer.
Tuesday, April 18, 2006
The benchmarks...
I promised some comparisons of SLES9 and why it has slow I/O. I haven't cleared releasing the application yet (it is a small piece of C code that opens as many files as you throw at it as an argument and then writes to those files). In place of that code, a workable substitute is one large file write: "time dd if=/dev/zero of=/tmp/testfile bs=16k count=65536". You can also try reads, but that is more divergent based on filesystem caching; if you bench reads, reboot between benchmarks (or otherwise flush all cache and buffers).
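One way to flush the cache without a reboot, on kernels new enough to have drop_caches (2.6.16 and later):

sync
echo 3 > /proc/sys/vm/drop_caches
time dd if=/tmp/testfile of=/dev/null bs=16k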
I've benchmarked this on HP servers (and a couple of desktops). I've tried different filesystems, different kernel versions, and different I/O subsystems (SCSI, SATA, ATA). The numbers pretty much go the same way (except by kernel version, as you will see). Apparently Red Hat backported the patch, or knows about the bug and fixes it in their kernel.
Here are some benchmarks, all on the same HP DL140 hardware (2.8GHz Xeon, 1GB RAM, SATA drive, 11211 BogoMIPS):
Hardware | OS | FS Type | time a.out 1000 | user | sys | Notes |
---|---|---|---|---|---|---|
dl140 | SLES 9 | reiser | 5m15.042s | 0m38.427s | 0m8.303s | unresponsive after a few seconds and well after test ls will hang |
dl140 | SLES 9 | ext3 | 5m31.042s | 0m38.427s | 0m8.303s | unresponsive after a few seconds and well after test ls will hang |
dl140 | SLES 9 | reiser | 3m53.546s | 0m44.687s | 0m3.052s | 2.6.9 kernel unresponsive |
dl140 | SLES 9 | reiser | 2m51.070s | 0m44.687s | 0m3.052s | 2.6.16.1 vanilla kernel responsive. |
dl140 | FC 5 | ext3 | 1m52.354s | 0m44.515s | 0m7.124s | responsive. |
dl140 | FC 5 | ext3 | 1m52.354s | 0m44.515s | 0m7.124s | run 5 instances still responsive |
CentOS/RHEL4 perform similarly to FC 5. You can see reiser is slightly faster than ext3 on the SUSE test, but it doesn't matter, as both are blown away by a good kernel. The interesting thing is SuSE/Novell didn't really want to hear about this when I tried to open a ticket. I'll be trying again. I have benchmarks from DL380s and a reproducible method that doesn't rely on the C program, just dd (you can also produce the bug with sort and some other ways).
The nice thing here is we can double our performance by going to a new distribution.
The dismal thing is a $500 desktop (1.7GHz P4 Celeron, 512MB of RAM, ATA drive) with FC 5 was able to perform on par with a ~$10,000 DL385 (dual Opteron, 8GB of RAM, six-drive SCSI RAID array) on the first run, and able to beat the Opteron with multiple runs. That means this silicon garden is poorly optimized and utilized.
Tuesday, April 11, 2006
Two ways to sync files on servers.
Two ways to synchronize files:
rsync over ssh (superhandy with keys and a key agent):
rsync -av source destination
Where source and destination are ssh style: username@host:/path/to/
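A concrete push with made-up names (note the trailing slash on the source means "the contents of" the directory, not the directory itself):

rsync -av -e ssh /var/www/ deploy@webhost:/var/www/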
Another nice way to sync files is rdist.
rdist allows you to sync files to many nodes from a master (it is easy to set up and configure).
It is very handy. It is probably included with your distribution, it is definitely used by several web service providers, and the homepage is here: http://www.magnicomp.com/rdist/
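A minimal Distfile sketch (hostnames made up; check the rdist man page for your version's exact dialect):

HOSTS = ( web1 web2 )
FILES = ( /etc/motd /etc/hosts.allow )
${FILES} -> ${HOSTS}
	install ;

Run rdist in the directory holding the Distfile and it pushes FILES to every host in HOSTS.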
SLES9 kernel compile
I'm still getting to the bottom of why I/O on SLES 9 is so pathetic (future article with benchmarks on various HP server hardware). But I'm testing many kernel compiles, and there wasn't a good recipe for SLES9 and vanilla 2.6 kernels. I've tested this with 2.6.16.1, 2.6.9, and 2.6.12.6.
SLES 9 kernel build:
Make sure you have the kernel source and gcc on the machine:
yast2 -i kernel-source gcc
then get the linux kernel and untar it (replace the kernel.org/foo.kernel stuff with a real path to a kernel):
wget http://kernel.org/foo.kernel.bz
tar -jxvf linux-2.6.16.1.tar.bz2
then move the kernel to /usr/src/
mv kernel-foo /usr/src/
remove the old kernel build symlink:
rm /usr/src/linux
make new symlink to the new kernel:
ln -s /usr/src/kernel-foo /usr/src/linux
If you want to build a kernel with options similar to a SLES kernel (building the new one with the old config), then you need to get an old .config file.
If you are running a plain SLES kernel:
zcat /proc/config.gz > /usr/src/linux/.config
or you can copy the .config file from a SLES kernel source:
cp /usr/src/linux-2.6.5-7.244/.config /usr/src/linux/.config
Now you have one more interactive part:
make oldconfig
That will ask questions only about new options in the kernel that are not covered by the old .config. You may be able to take all the defaults.
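If you know you want every default, one shortcut is to feed it empty answers:

yes "" | make oldconfig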
I will break down the next steps with commentary, but the rest does not necessarily need intervention, so I usually stack them on a command line (see below):
Clean up old build output:
make clean
Make a bootable linux image:
make bzImage
Compile loadable kernel modules (these are the same as drivers usually):
make modules
Install said drivers:
make modules_install
This builds the module dependency map; if it isn't done right, you might not boot. Replace the 2.6.16.1 with your kernel version:
depmod -ae -F System.map 2.6.16.1
Install everything:
make install
So a line to do all of the post interactive stuff with a 2.6.12.6 kernel would look like:
make oldconfig && make clean && make bzImage && make modules && make modules_install && depmod -ae -F System.map 2.6.12.6 && make install
And then you might have to wait for a while. If you use grub as a bootloader, you shouldn't have to do anything else to run your new kernel except reboot. This whole process is so much easier than 2.0, 2.2 or 2.4 kernels. It still isn't as easy as apt-get upgrade kernel or yum update kernel or emerge or whatever, but such is the price you pay for SuSE.
Saturday, April 01, 2006
Sending attachments from a script or Linux CLI
mutt -s SUBJECT -a ATTACHMENT user@domain.com
is a nice way. I like mutt as a mail user agent. It can do pgp and stuff too if you need to.
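In a script you usually want to hand mutt a body on stdin too, so it does not sit at a prompt (address and file made up):

echo "report attached" | mutt -s "nightly report" -a /tmp/report.csv user@domain.com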
mailx is another way. I prefer mutt, but now you have at least two ways to send attachments from a linux script or command line.
UPDATE:
A third way posted by anonymous in the comments:
uuencode local_file.name remote_file.name | mail -s "file attachment" user@example.com
if you have uuencode and mail installed, it works very well. Thanks, anonymous!
Thursday, March 30, 2006
SLES 9 IO problem
There are some aphids in the garden right now.
I have an interesting problem with SLES9. I know, it isn't my favorite Linux distribution either, but it is pretty good for businesses that like support, and it is a great Java platform.
The interesting problem is with I/O. On a default install on various platforms, if you run a high I/O job the system becomes unresponsive. I believe it is a problem with the kernel version, because none of the other Linux distributions I've tried have it.
SLES9 is stuck on a heavily patched 2.6.5 kernel, and if you look through the kernel changelog, you'll see plenty of virtual memory and I/O improvements since then.
Today, I think I'll try a vanilla 2.6.12 or later kernel, because if my suspicions are right, the bug was fixed somewhere between 2.6.5 and 2.6.12.
Tuesday, March 28, 2006
Two laws that stifle the growth of technology in the US
These are the two worst laws for a gardener of silicon rust:
the DMCA ( http://en.wikipedia.org/wiki/DMCA )
and
the Patriot Act ( http://en.wikipedia.org/wiki/USA_PATRIOT_Act )
These laws do not encourage the things that brought the great innovations the US is famous for, like the Internet, Unix, Ethernet, etc. They would inhibit a Bell Labs or a Xerox PARC, or even the children of Xerox PARC's products (like Apple and 3Com and the like).
If you are a citizen and want to see the U.S. keep up with the world in technology, write your congressperson and contribute to the EFF ( http://www.eff.org/ ). If you can, do more. If you can't, anything will help.