Saturday, December 31, 2005

Slackware Linux



Dear readers,

It's an obvious thing to say that I really like Slackware Linux. Why? Because of its stability and simplicity. You know KISS? Keep It Simple, Stupid! If you want to know Linux, learn Slackware. If you want a headache, learn Slackware. :D But once you master it, nothing beats it. For rock-solid Linux stability, I had to switch to Slackware. Mandrake could not meet that requirement: it has an APIC issue that conflicts with the power-management feature, which results in a hanging server. The only solution was a hard reboot (pressing the reset button). Don't get me wrong, Mandrake is a good distribution too, but it sometimes carries unnecessary bells and whistles, and for some reason it fails to run properly on certain machines.

To fix this problem (and after it gave me a lot of headaches and stomachaches :P), I gave Slackware a go. Three days on, I haven't noticed a single hang or any slowdown whatsoever. That makes for happy users and, most importantly, a happy system admin :-) . You can try accessing my webmail here.

Happy New Year!

Tuesday, December 27, 2005

Mannheim - The Open Source city



The technology decision makers have already moved the majority of Mannheim's 120 servers to the open-source operating system. Next, they plan to shift its 3,500 desktops to the open-source productivity application OpenOffice.org, running on Linux. The migration should help the city with its aim of using programs that support open standards, which can be used by any software, whether closed source or open source. Some U.S. states--notably Massachusetts--as well as local and national governments have been embracing standard file formats such as the OpenDocument format used by OpenOffice, a move that ensures that public documents won't be beholden to a particular proprietary program.

"We want to decide our IT strategy in Mannheim, and not have Microsoft make the decision for Mannheim," said Gerd Armbruster, the IT infrastructure manager at the German city.

"We want to decide our IT strategy in Mannheim, and not have Microsoft make the decision for Mannheim."

The city's IT department changed from Microsoft Exchange Server 2003 to Oracle Collaboration Suite because the Oracle suite supports open standards, even though it is proprietary software, Armbruster said. The switch to Linux was predominantly driven by the department's wish to use OpenLDAP, an open-source software package, rather than Microsoft's proprietary Active Directory, he added. On the desktop, the planned migration to OpenOffice was similarly driven by the city's desire to use OpenDocument, which Microsoft has said it will not support in its Office application. In September, the state of Massachusetts decided to standardize on desktop applications supporting OpenDocument, a move that has attracted controversy and come under fire from state officials. Last week, the Massachusetts governor's office said that it is "optimistic" that Microsoft's Office formats, once standardized, will meet the state guidelines for open formats.

In contrast to many other large-scale moves, the cost of the Linux shift was largely irrelevant in Mannheim's decision, Armbruster said. The city recently paid approximately 1 million euros (about $1.18 million) to Microsoft to migrate from Office 2000 to the 2003 version, but that was not important in internal discussions, Armbruster said.

"We never said to our mayor that if we switch to Linux, we won't need to pay 1 million euros to Microsoft," he said.

Although the city will save some money by switching to open-source desktops, it is likely to have to spend a considerable sum migrating desktop applications from Windows to Linux.

"We need to change 145 applications so they will work on Linux. This will cost millions of euros," Armbruster said.

Migrating those applications will not only take money, it will take time. Because of this, Mannheim's shift to Linux on the desktop is not due to start for five or six years. However, the move to OpenOffice on Microsoft Windows will begin next year, with the aim of putting the open-source productivity application on 3,500 desktops across 40 departments by 2009.

"The migration to OpenOffice has to end when support for Office 2003 ends, so we have about four or five years to complete the migration," Armbruster said.

Talk to customers

The infrastructure manager believes that one of the most important factors for a successful migration is acceptance by the people who actually use the software.

"It is important for me to have no resistance from users," he said.

It is so important that the Mannheim IT department is providing every city employee with copies of OpenOffice and Linux for their home PC and will even provide support for home users. The department is attempting to include those employees in the desktop migration project by arranging meetings where they can discuss their concerns. Armbruster thinks that the lack of user engagement is one of the main problems causing a delay in Munich's migration to open-source desktops.
"Most of the problems in Munich are due to resistance from users--they don't want to change to Linux," Armbruster said.

"It's important for an open-source project that you inform your users. You need to talk with users and speak about their problems."

In September, the city of Munich said that its switch to Linux for desktop computing would not get going until next year--one year later than planned and three years after it first announced its move to the open-source operating system. The IT department there is expecting to move 14,000 desktops from Windows NT 4.0 to Linux and from Microsoft Office 97 and 2000 to OpenOffice. Armbruster is confident that these kinds of delays won't happen with his city's migration.

"We haven't seen any resistance from users in the city of Mannheim. We have talked with department managers and power users and they accept our strategy to slowly move to Linux," he said.

"Most of the problems in Munich are due to resistance from users--they don't want to change to Linux."

The problems with Munich's switch encouraged Armbruster to publicize Mannheim's process, to show that an open-source migration can go more smoothly.

"Microsoft is probably very happy about the project in Munich because of its problems," he said. "One year ago, I didn't want to go public about our migration. I have now gone public because the project in Munich is not a success, but our project is. I wanted to say, 'Here is a city with about 6,000 employees where open source and open standards work already.'"

The OpenOffice migration

The first stage in Mannheim's migration to OpenOffice, the evaluation of its Microsoft Office documents, started earlier this month. The city is using a migration analysis tool called SCAI MAS to scan 500,000 administration documents and so identify which files cannot be automatically converted to OpenOffice.

"We expect that maybe 10 or 20 percent of documents will have problems when we move from Word to OpenOffice.org," Armbruster said.


Some of the macros contained within the Microsoft Office documents can be automatically converted into OpenOffice macros, but some will need to be reengineered. The evaluation project is due to be finished in mid-January, after which the IT department will start migrating the first departments to OpenOffice. It plans to switch over only two departments in the first year, one of which will be the IT department. Although some Mannheim employees will not have access to OpenOffice.org for a few years, they have already been using at least one open-source application for almost two years--the Firefox browser. Armbruster says the city has been using the Mozilla browser since version 0.8 came out in February 2004. Microsoft's Internet Explorer is not used for Internet access for "security reasons," he said.

"We want to move to Linux on the desktop when it has the same look and feel as Windows."

When Mannheim has finished its move to OpenOffice, it will start its migration to desktop Linux. This delay will not only give the city time to replace its 145 Windows-specific applications with programs that will run on Linux, but it should also ensure that the Linux desktop environment is more mature by the time Mannheim adopts it.

"In every new Linux version we see more Windows functionality," Armbruster said. "We want to move to Linux on the desktop when it has the same look and feel as Windows."

Armbruster did not say what version of Linux it plans on installing in the future, but he is a fan of Ubuntu, a free Linux distribution based on Debian. Ubuntu is the distribution that will be offered to city employees to try out at home, Armbruster said.

"I think Ubuntu is very interesting, more interesting than SuSE or Red Hat's desktop products," he said. "I have friends who wanted to try Linux at home, and when they installed SuSE or Red Hat, they had 500 or 800 programs. You don't need 800 programs; with Ubuntu you get fewer applications,"


Although other German cities echo Mannheim's view on the importance of open standards, many are reluctant to change, as they have only recently moved to proprietary technologies such as Active Directory, Armbruster said. There are other reasons why government agencies may find it hard to follow Mannheim's lead in adopting open standards. Mannheim is a long-term user of Unix, which has meant that the migration to Linux is easier for it than for bodies that predominantly use Microsoft software.

Cost is also likely to be a prohibitive factor for many government agencies. Mannheim's migration to Linux is expected to cost millions of euros. That short-term cost could be difficult to justify to senior management executives, who are unlikely to fully understand the need for open standards.
-- source : ZDnet

Sunday, December 25, 2005

dbmail - How to

First of all, I would like to share with you my experience setting up a *testing* mail server using DBMail as the IMAP server and Postfix as the SMTP server, on Linux of course. This time I used Slackware Linux 10.2, and the server sits behind a firewall. For an overview of what DBMail is, please visit dbmail.org or read my previous posting.

Requirements
You need all of these:

  1. MySQL server (I used 4.1.14; this version supports InnoDB)*
  2. DBMail package (I used version 2.0.7)
  3. Postfix (I used version 2.2.7)
  4. DBMail source (get the latest from dbmail.org)
* Since some DBMail tables can get VERY large (depending on your mail usage), the DBMail documentation advises using InnoDB as the database storage backend.

Let's get dirty!
Make sure MySQL is running. First you'll need to create the DBMail database in MySQL. You can do this by issuing the following command. This step is only necessary if you do not have a database for DBMail yet. Note that you will be prompted for the MySQL root password.

mysqladmin create dbmail -u root -p

This creates a database with the name "dbmail". Now you have to give a non-root user access to this database. Start the MySQL command-line client as root:

mysql -u root -p

and enter the following command:

GRANT ALL ON dbmail.* TO dbmail@localhost IDENTIFIED BY '<password>';

Where <password> should be replaced with the password you want for the dbmail user. After this step, the database is ready to be used by the dbmail user. The next step is the creation of the database tables used by DBMail. Log out of the MySQL client and run the following command from the command line. You will have to enter the password you set in the previous step.

mysql -u dbmail dbmail -p < create_tables.mysql

(create_tables.mysql ships with the DBMail source.) Copy the dbmail.conf file to /etc and edit it, setting everything in there to your liking. Make sure the database name, user and host are configured in dbmail.conf. The other options in the configuration file are documented there.
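As a hypothetical excerpt (the exact key names can differ between DBMail versions, so check the sample dbmail.conf shipped with the source; 'changeme' is a placeholder), the database settings look roughly like this:

```
[DBMAIL]
host = localhost
user = dbmail
pass = changeme
db   = dbmail
```

These must match the database, user and password created in the MySQL steps above, or the DBMail daemons will fail to connect.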

Run configure & make
Run the configure script. This script uses pg_config or mysql_config (depending on --with-mysql or --with-pgsql) to detect where the libraries and include files for these databases are. e.g. when working with PostgreSQL, this is the configure command:

./configure --with-pgsql

For MySQL,

./configure --with-mysql

After running configure, 'make all' will build the executables. Running 'make install' will install the executables in /usr/local/sbin.

Next you will need to create some users in the DBMail system. Currently this can be done in two ways: using the dbmail-users utility, or directly in the database itself. To use the dbmail-users utility, do the following:

dbmail-users -a <username> -w <password> -g <clientid> -m <maxmail> [-s <aliases>]

clientid can be left 0 (use it if you want certain mail administrators to administer specific groups of mail users). maxmail is the maximum number of bytes this user may have in his/her mailboxes; 0 is unlimited. Add K or M for kilobytes and megabytes. Aliases are a number of aliases for this user; @domain entries are domain aliases.

A user always needs at least one alias to receive mail, unless the user's username is something like foo@bar.org, where bar.org is a domain the mail server delivers to.

example:

./dbmail-users -a zamri -w puttycat -g mail -m 25M -s zamri@dude.org zamri@dude.net @net.com

This will create a user zamri with the password puttycat. It will set zamri's mail limit to 25 MB, and all mail for zamri@dude.org, zamri@dude.net and @net.com will be delivered to zamri. The @net.com is a fallback alias: all mail to a @net.com address that cannot be delivered to an existing alias will be sent to zamri.
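For many accounts, the dbmail-users call above can be generated in a loop. Here is a minimal sketch (the user names, password and domain are hypothetical placeholders) that writes the commands to a file for review before anything is actually run:

```shell
# Generate one dbmail-users invocation per user into a review file.
# alice/bob, 'secret' and dude.org are placeholder values.
out=$(mktemp)
for u in alice bob; do
    echo "dbmail-users -a $u -w secret -g mail -m 25M -s $u@dude.org" >> "$out"
done
cat "$out"
```

Inspect the generated file, then execute it with `sh "$out"` once it looks right.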

Now for Postfix. I had to add this line in /etc/postfix/master.cf:

dbmail-lmtp unix - - n - - lmtp

If you want verbose output in the mail log, add -v to lmtp, like this:

dbmail-lmtp unix - - n - - lmtp -v

Note: this is good for troubleshooting. Don't underestimate it, but be aware that you get a LOT of output in your logs with this setting.

Now edit main.cf and add / change the mailbox_transport directive to:

mailbox_transport = dbmail-lmtp:localhost:24 ^
local_transport = dbmail-lmtp:localhost:24 *

* Note: this line makes sure local mail is delivered to DBMail.
^ Note: this line makes sure mail uses DBMail's transport, not including local delivery.

Then set this:

local_recipient_maps =

Note: this step is *VERY* important, or mail from outside cannot reach your mailbox. See INSTALL.postfix in the source for more info.

After that, run these commands:

postmap /etc/postfix/transport
postfix reload

Run DBMail's server daemons:

dbmail-lmtpd
dbmail-imapd

Make sure Postfix and MySQL (or PostgreSQL) are running. Try sending and replying to local users first; if that is successful, try the same with outside users.
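If delivery fails, you can talk LMTP to dbmail-lmtpd by hand on port 24. A hypothetical session (server banners vary by version; the addresses are examples) looks roughly like this:

```
$ nc localhost 24
220 mailhost DBMail LMTP service ready
LHLO localhost
250 mailhost
MAIL FROM:<test@example.org>
250 Ok
RCPT TO:<zamri@dude.org>
250 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: lmtp test

hello
.
250 Ok
QUIT
221 Bye
```

A 250 reply after the final dot means DBMail accepted the message; an error there points at the DBMail side rather than Postfix.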

Thursday, December 22, 2005

dbmail - The high performance mail server



DBMail is a mail system that stores mail, including attachments, in a database. I really liked this idea when I first read about it on the net. The first thing that came to my mind was speed. A traditional filesystem can't beat the speed of an SQL query, especially when we deal with thousands of users accessing millions of mails. That said, the time for retrieving, storing and searching mail can be reduced significantly. One system that can compete with this is Cyrus. Cyrus is conceptually similar to DBMail, but it uses a different storage backend.

I am now struggling to set up my mail server based on DBMail, using SquirrelMail as the frontend to log in and access mail. Something has gone wrong somewhere and it's been 2 days now: I just can't receive mail. Login works OK. I hope I can complete it by next week.

Don't you think learning new technology is fun?

Sunday, December 4, 2005

How to setup USB Scroll Mouse in Linux



This is my experience setting up my mouse (a USB scroll mouse) on my new Slackware Linux box. On a standard installation, the Slackware installer did a good job of detecting my USB mouse but did not configure the scroll wheel properly, so I just couldn't use the wheel to scroll. On my Mandrake system, it detected my USB mouse and configured the scroll wheel automagically. This is what I needed to add to the InputDevice section of my /etc/X11/xorg.conf:

option "Protocol" "IMPS/2"
option "ZAxisMapping" "4 5"
option "Buttons" "5"

Make sure the Protocol is IMPS/2, because the plain PS/2 protocol does not support the scroll wheel. Happy scrolling!...
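For context, here is roughly how those options sit inside a complete InputDevice section; the Identifier and the device path are examples and may differ on your system:

```
Section "InputDevice"
    Identifier "USB Mouse"
    Driver     "mouse"
    Option     "Device"       "/dev/input/mice"
    Option     "Protocol"     "IMPS/2"
    Option     "ZAxisMapping" "4 5"
    Option     "Buttons"      "5"
EndSection
```

Restart the X server for the change to take effect.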

Friday, December 2, 2005

Gmail now with antivirus scanner


Gmail just launched a new feature for Gmail users: a virus scanner for attachments. The feature works this way:
  1. Each time we send or receive attachments, Gmail scans them for viruses.
  2. If a virus is found in an attachment we've received, the system will attempt to remove it, or clean the file, so we can still access the information it contains.
  3. If the virus can't be removed from the file, we won't be able to download it.
  4. If a virus is found in an attachment we're trying to send, we won't be able to send the message until we remove the attachment.
Although I have never encountered any viruses attached to emails sent to my Gmail account (maybe they were already running some sort of virus scanner that filtered infected attachments to /dev/null :) ), I still think this is an important feature for Gmail users, because some users don't know that their files have been infected by a virus and send them to their friends without knowing it.

Thursday, December 1, 2005

WiMax - The next generation of wireless technology

Wireless technology has become common these days. In Malaysia, ISPs have been introducing this technology for many years now: TMNet's HotSpot, JARING's wireless broadband, and TIME's webbit and TIMEZONE. I never bothered about this technology, since I didn't have the equipment and wasn't involved in setting one up. But when my company decided to get an ISDN line (a year ago, before changing back to TMNet's Streamyx), the vendor gave us a free wireless access point (AP). That's when my involvement in setting up and learning this technology began. The most important point for me was "to get the idea of how it works, know the system and how to implement it". Then I realized many things and threw out lots of "ooooh, I see" words. :)

Recently I've read about WiFi technology and its history. Many people think that wireless means WiFi. This is not true; there are many types of wireless network connection. Commonly used are:
  1. 802.11a (speed - 54 Mbps, FQ band - 5 GHz)
  2. 802.11b (speed - 11 Mbps, FQ band - 2.4 GHz) -- known as WiFi
  3. 802.11g (speed - 54 Mbps, FQ band - 2.4 GHz)
As you can see above, the 802.11b type wireless network connection is the one known as WiFi. We usually use certain words loosely. For many of us, 802.11 is WiFi. :) Ok. Let's take a look at the next generation of wireless connection.

One of the most talked about developments for next generation wireless broadband deployment is a technology commonly referred to as WiMAX. A recent report issued to Congress by the Federal Communications Commission concluded that WiMAX “has the potential to alter and further accelerate the evolution of broadband services.”

As the next evolutionary step of its WiFi predecessor, WiMAX is being touted as an easily deployable “third pipe” that will deliver both flexible and affordable last-mile broadband access to millions. Many believe that WiMAX will do for broadband access what cellular phones did for telephones: connect users directly to the Internet from anywhere within a major metropolitan area.

What is WiMAX anyway?

WiMAX, short for Worldwide Interoperability for Microwave Access, is a wireless standard developed by a working group of the Institute of Electrical and Electronics Engineers (IEEE). The standards developed by the IEEE form the foundation for nearly all data communication systems and apply to coaxial, copper and fiber optic cables.


The IEEE sought to establish a more robust broadband wireless access (BWA) technology through its 802.16/WiMAX standard. It released its first 802.16 standard in December 2001 which addressed systems operating between 10 GHz and 66 GHz.

IEEE 802.16 addresses the "first-mile/last-mile" connection in wireless metropolitan area networks. It focuses on the efficient use of bandwidth between 10 and 66 GHz (the 2 to 11 GHz region with PMP and optional Mesh topologies by the end of 2002) and defines a medium access control (MAC) layer that supports multiple physical layer specifications customized for the frequency band of use.

WiMAX is a wireless metropolitan area network (MAN) technology that can connect IEEE 802.11 (Wi-Fi) hotspots with each other and to other parts of the Internet and provide a wireless alternative to cable and DSL for last mile (last km) broadband access. IEEE 802.16 provides up to 50 km (31 miles) of linear service area range and allows connectivity between users without a direct line of sight. Note that this should not be taken to mean that users 50 km (31 miles) away without line of sight will have connectivity. Practical limits from real world tests seem to be around "3 to 5 miles" (5 to 8 kilometers). The technology has been claimed to provide shared data rates up to 70 Mbit/s, which, according to WiMAX proponents, is enough bandwidth to simultaneously support more than 60 businesses with T1-type connectivity and well over a thousand homes at 1Mbit/s DSL-level connectivity. Real world tests, however, show practical maximum data rates between 500kbit/s and 2 Mbit/s, depending on conditions at a given site.


WiMAX in Malaysia

In my country, a company named AirZed is the pioneer in this new territory. According to Paul Tan, as of June 2005, AirZed's WiMAX service covers Mid Valley, Damansara, Petaling Jaya and Shah Alam. I don't know which areas they have covered lately, but people in Kuala Lumpur and nearby don't have to worry, because "all the latest and greatest technology will reach you sooner than others" :P. Prices start at RM188 a month for the Home package, which offers 1Mbps download and 128kbps upload with a dynamic IP. The SOHO package is RM288 a month and offers 1Mbps download and 384kbps upload, but with a dynamic IP as well. The Business package is RM468 a month, offering 1Mbps download and 512kbps upload; it also comes with a fixed IP and 6 free AirZed Wi-Fi hotspot accounts. As usual, once a pioneer has started, others will follow sooner or later. Other companies, especially the big ISPs, are eyeing this new technology.





Wednesday, November 30, 2005

Firefox 1.5 released


Firefox 1.5 has been released after the 1-year wait from the last major software update, Firefox 1.0. It sports a new rendering engine (Gecko 1.8) as well as hundreds of other software improvements. Important changes include faster page loading, the ability to reorder tabs, faster back and forward buttons, a feature to clear personal data, improved accessibility and popup blocking, and SVG, CSS 2 and CSS 3, and JavaScript 1.6 support. You can download it here.

Firefox has attracted attention as an alternative to other browsers such as Microsoft Internet Explorer. As of September 2005, estimates suggest that Firefox's usage share is around 7.6% of overall browser usage (see market adoption section). Since its release, Firefox has slightly reduced Internet Explorer's dominant usage share.

Monday, November 28, 2005

It's raining cats and dogs

It's the monsoon season here on the East Coast of Peninsular Malaysia. When I think of rain, I think of the idiom "it's raining cats and dogs" for heavy rain. What I pictured (at the time) was a group of cats and dogs quarrelling while falling off a roof. Funny, isn't it? Idioms are full of funny (and crazy?) images, though not all of them. One I've heard since my secondary school years is "hustle and bustle". My teacher said, "It's too English for you!" None of us got the correct answer for that particular fill-in-the-blank question. Oh well. Even now, as a worker talking and writing in English for most of my work, I still rarely use many of them.

Wednesday, November 16, 2005

I can't take my eyes off you (relax a bit)



Nor yet were you seen
neither seen nor heard
when this earth was made
when the sky was built...

- Kalevala 3:245-248

He looked up as he plodded
Down 42nd Street,
So many nameless faces
All shuffling to some beat.

So many anonymities,
But then, to his surprise,
He suddenly made contact
With a pair of hazel eyes.

She looked into his soul's windows
And didn't look away.
A wry smile grew inside his mind;
"All right," he thought, "I'll play."

He gazed back at her placidly,
His cold blue eyes grew warm.
And only then did he glance down
To admire her female form.

As she passed him by that day,
He hazarded a grin.
She kept her lovely eyes on his
But still,
to his chagrin,
The woman walked away from him,
Without reciprocating.

He chuckled silently,
knowing
That other eyes were waiting.


-- courtesy : AndersenSilva.com

Wednesday, October 12, 2005

Linux Advanced Routing And Traffic Control (LARTC)

Introduction

Networking is an essential part of the success of Linux as an operating system; flexibility and robustness are the key points. However, the user-friendliness at the very early stage was not good, resembling old Unix. Nowadays, many modern Linux distros come with good interfaces for setting up networking, and many things are configured automatically when the hardware is detected.

Many organizations need advanced routing for their network infrastructure; a basic network infra cannot cope with certain conditions. This is where advanced routing comes into play. In Linux, we have the iproute2 package, which works hand-in-hand with iptables and a recent kernel for advanced routing. The topic is thoroughly covered on the LARTC home page at http://www.lartc.org. My article here just covers the basics.

Make it work

Let's take a look at this scenario:

Scenario 1

We want to route packets that come from the local network(s) to two different (or two of the same) ISPs. Say the two ISPs are tm1 and tm2 with their associated IPs, respectively (see above diagram --deleted. Will update soon! - 16/11/2005).

Our work is done on the router box. Log in as root and add two tables:

echo 1 tm1 >> /etc/iproute2/rt_tables
echo 2 tm2 >> /etc/iproute2/rt_tables

These commands add 2 new table entries to the rt_tables file. The content of the file after the commands above:

255 local
254 main
253 default
0 unspec
1 tm1
2 tm2

Now we have 3 routing tables :
main
tm1
tm2
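If you want to rehearse the rt_tables edit first, the same echo commands can be run against a scratch copy of the file, so a typo cannot clobber the real one:

```shell
# Build a scratch copy of /etc/iproute2/rt_tables with the stock
# entries, then append the two custom tables exactly as above.
rt=$(mktemp)
printf '255\tlocal\n254\tmain\n253\tdefault\n0\tunspec\n' > "$rt"
echo 1 tm1 >> "$rt"
echo 2 tm2 >> "$rt"
tail -n 2 "$rt"   # shows the two new entries
```

Once the scratch copy looks right, repeat the echo commands against the real /etc/iproute2/rt_tables as root.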

The next step is to populate the routing rules to the tables:

tm1 table

~# ip route add default via 10.1.1.1 dev eth1 table tm1
~# ip rule add from 192.168.0.0/24 table tm1

**Explanation
Packets that come from 192.168.0.0/24 will go to the tm1 routing table and then be passed to the tm1 gateway (default route), which is 10.1.1.1 on device eth1.

tm2 table

~# ip route add default via 10.8.8.1 dev eth2 table tm2
~# ip rule add from 192.168.1.0/24 table tm2

**Explanation
Packets that come from 192.168.1.0/24 will go to the tm2 routing table and then be passed to the tm2 gateway (default route), which is 10.8.8.1 on device eth2.

To see the routing tables after the commands above:

To see tm1 table:

~# ip route show table tm1

To see tm2 table:

~# ip route show table tm2

To see main table:

~# ip route show table main

Please note that this kind of routing can't be done without the iproute2 package. Please make sure this package is installed first, using your distro's utility. On Mandrake, this is as simple as urpmi -v iproute2.


Let's say you want to route packets based on their destination ports. You can do this with the help of iptables. To mark packets that have 22 or 80 as the destination port, we use the mangle table as below:

~# iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 80 -j MARK --set-mark 1
~# iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 22 -j MARK --set-mark 2

Let's say you want to separate their routes based on the destination port:

All packets with destination port 80 will go out via table tm1

~# ip route add default via 10.1.1.1 dev eth1 table tm1
~# ip rule add from all fwmark 1 table tm1


All packets with destination port 22 will go out via table tm2

~# ip route add default via 10.8.8.1 dev eth2 table tm2
~# ip rule add from all fwmark 2 table tm2


Happy experimenting!

Saturday, September 17, 2005

How to make your mailserver faster?


My mail server was very slow when more than 10 users accessed it simultaneously. It uses SquirrelMail as the front-end for webmail. As you know (or maybe don't know :P), SquirrelMail is built with PHP, and an administrator can extend its functionality via plugins; there are many good plugins to ease the administrator's chores and help users too. On the good side, PHP is a very powerful language, but it suffers when it has to open and execute many files: a script has to be recompiled every time it is executed, which means slower response because the same work is done over and over again. But thank god, I just found the solution, and it is simple: the first time a PHP file is compiled, the result is cached and reused whenever the file is fetched, giving faster execution and faster response. The term for this solution is a PHP accelerator. There are many PHP accelerators out there, but the one that gives a large performance boost is eAccelerator, formerly known as Turck MMCache, claimed to be the fastest PHP accelerator around. For the time being I have installed Turck MMCache (the old version of eAccelerator), as I haven't found a Mandrake package of eAccelerator yet. This is what I installed on my mail server box to make it faster:

1. php accelerator (Turck MMCache)
To speed up php execution.

2. caching-nameserver (BIND 9)
To speed up dns resolution.

3. imapproxy (http://www.imapproxy.org) -- updated : Sept 20,2005
To speed up connection to imap server by caching it.

4. chmod 1777 /var/spool/mail -- updated : Oct 18,2005
Speeds up SquirrelMail access to mail, and on some distros like Mandrake 10.x it can prevent crashes. (Only for those who use mbox-style mail format.)
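On item 4: mode 1777 is world-writable with the sticky bit set, so every user can create mail files in /var/spool/mail but only delete their own. A quick sketch on a throwaway directory (instead of the real spool) shows the resulting mode:

```shell
# Demonstrate mode 1777 (sticky bit + rwx for all) on a temp
# directory rather than touching the real /var/spool/mail.
d=$(mktemp -d)
chmod 1777 "$d"
stat -c '%a' "$d"   # → 1777
```

The trailing "t" you would see in `ls -ld` output is the sticky bit; without it, any user could delete another user's mbox file in a world-writable spool.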

Clever, huh? If you understand the details of how networked boxen connect and talk to each other, you also know how to make them faster.

Cheers!

Monday, August 29, 2005

Merdeka day celebration

Cherating -- 2 days before National Day.

Every year on 31st August, Malaysians celebrate "Merdeka Day", our National Day. On National Day eve, I was still at my office doing my work: repairing, downloading, configuring, installing and testing. What a day for me. I never thought of resting at home or joining my friends to celebrate Merdeka Day. Not that I had no clue or no experience celebrating it; I just didn't give a damn. Why? Because I preferred to celebrate it in the cyber world. Wired, and sometimes wireless.

I had experienced it before, during my study years in Penang. Those days were unforgettable. We gathered at a field near Padang Kota Lama, waiting for the countdown. 10..9..8....3..2..1 and the fireworks started. What did we get from a celebration like this? Is this the best way to celebrate? Many people are complaining about the rising oil price and many suffer from it. How can we waste money on fireworks? Whose money is that? Can you tell me?

Thursday, August 18, 2005

Ethernet bonding revisited


Ethernet bonding

2 days ago I was able to set up Ethernet bonding on my Linux box, which acts as a proxy server and firewall. What I did was bond two private network interfaces to act as one. All I needed to do was install 2 NICs and set up the driver (autodetected on modern distros). Then I created the bonding interface like this:

~# modprobe bonding mode=0 miimon=100

~# ifconfig bond0 192.168.0.2 netmask 255.255.255.0 up

~# ifenslave bond0 eth0

~# ifenslave bond0 eth1

Note:

bond0 is the "bond" interface. eth0 and eth1 are the interfaces to be bonded, the so-called slave devices (or interfaces). They are connected to my private network (LAN). The bond interface takes the MAC address of the first slave device as its own. Please take note that mode=0 means we want load-balancing capability. You can use this bond interface in iptables commands or with tcpdump. In my network environment, I apply this command to the bond0 interface:

~#iptables -t nat -A PREROUTING -s 192.168.0.0/24 -d 0/0 -i bond0 -p tcp --dport 80 -j REDIRECT --to-port 3128

where Squid is running on port 3128; if not, change it accordingly. For tcpdump:

~#tcpdump -i bond0
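To make the bond survive a reboot, the module options can go into the module configuration file instead of being typed at the prompt. A sketch only; the file name (/etc/modprobe.conf here) and syntax varied between distros of that era, so check your own distro's docs:

```shell
# /etc/modprobe.conf (file name is distro-dependent; an assumption here)
alias bond0 bonding
options bonding mode=0 miimon=100
```

The ifconfig and ifenslave commands above would still go in a boot script such as rc.local.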

I wish you all good luck and happy bonding! :-)

Monday, August 15, 2005

Monday sickness, chatting, Xandross

If today were not Monday, I would have been in Pangkor ... swimming..



Today is Monday, 15 August 2005. At 8.30 this morning I was still at home preparing for work. Last night, I was happily playing with my student's laptop: installing Linux, making some modifications to the KDE theme and testing CodeWeavers' CrossOver. My life was not connected for the whole of Sunday. Why? My hand phone was broken (again?) and I had to send it to my friend's shop (again?). He laughed at me and offered me a better handphone as a replacement, but for RM50. I said "Thanks but no thanks".

My life has changed since I learned how to chat. I had always thought that chatting was a waste of time. Actually, I just forgot that I should look at the good side of it and get rid of the bad. I never liked chatting as much as I do now. My life felt complete, I guess, when my communication over the internet was primarily by exchanging emails and via forums. Now I feel incomplete without double-clicking on Yahoo! Messenger when I first start working early in the morning. I have made a few friends via YM, and this real-time conversation can be fun and exciting, exchanging ideas and knowledge.

Now Monday comes again and life must go on. I was using Xandros Linux on my student's laptop and typing this blog entry. Searching for knowledge and doing practical work is my passion. In the computer world, you'll never understand something completely until you do it. In my job as a system administrator, I have gained more knowledge and experience than I can describe here. All I can say is that it is "fantastic" and sometimes plastic. :-)

Thursday, July 28, 2005

Setting up transparent proxy server

Hi all,

Today, while setting up IP addresses for my internal network, I found out that I had run out of IPs and the internet access was very slow. I ran into a situation called a "bottleneck": a situation where a road becomes narrow under heavy traffic. How to speed this up? The answer is a proxy server.

On with the theory
A proxy server is a server that can cache visited web pages (dynamic web pages are not cached). When a client accesses a website, the proxy server accesses the website on behalf of the client and caches it. The next time that client, or another client, wants to connect to the site, the proxy server just serves the cached copy, reducing the response time compared to fetching from the actual site.

Transparent proxy
In the normal proxy case, you have to configure each client manually to connect to the outside. That is not a practical solution if you have a lot of workstations plus many apps connecting to the internet. What is a more practical solution? The answer is a "transparent proxy", and this is where iptables comes into play.

What do you have to do first?
1. Setup a server
OS : Linux (whatever flavor you want)
proxy server : Squid (install the latest one)
utilities : netfilter packages (for iptables)

Squid.conf
Your squid.conf location depends on how you installed the Squid package. If you compiled it from source without tweaking the ./configure options, it is in /usr/local/squid/etc. If you used your package manager, it is in /etc. Wherever it is, you have to edit it before you can use Squid as a transparent proxy.

What to edit
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
acl lan src 192.168.1.1 192.168.2.0/24
http_access allow localhost
http_access allow lan

Please change "lan" to suit your network environment. This file is heavily documented. Please read the comments before you change anything unless you know what you're doing.

I don't want to explain in detail how to set up Linux for your server; please consult your specific Linux distribution's HOWTOs and FAQs. After you have completed setting up Linux, you should set up Squid. For more information on Squid, please visit http://www.squid-cache.org. Squid usually comes readily packaged for your distro, so first check whether you can simply install it from CD. If not, download it from the link above.

After you have edited squid.conf, this is the iptables command you should run on the proxy server.

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
provided that Squid is listening on port 3128. If not, change it accordingly.
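Once the transparent proxy is in place, you can gauge how much it helps by measuring the cache hit ratio from Squid's access.log. A minimal sketch; the two log lines below are hypothetical samples in Squid's native log format, where the fourth field carries TCP_HIT or TCP_MISS:

```shell
#!/bin/sh
# Estimate Squid's cache hit ratio from an access.log.
# The sample lines are hypothetical; point the awk command at your
# real log (often /var/log/squid/access.log).
cat > /tmp/sample_access.log <<'EOF'
1120000001.000    120 192.168.1.10 TCP_HIT/200 4512 GET http://example.com/ - NONE/- text/html
1120000002.000    340 192.168.1.11 TCP_MISS/200 9931 GET http://example.org/ - DIRECT/10.0.0.1 text/html
EOF
awk '{ total++; if ($4 ~ /^TCP_HIT/) hits++ }
     END { printf "hit ratio: %d/%d\n", hits, total }' /tmp/sample_access.log
# prints "hit ratio: 1/2"
```

A rising hit ratio over the first few days is the sign that the cache is actually relieving the bottleneck.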

Tuesday, July 19, 2005

Setting up virus scanner for mail server

This task is quite simple and the tools are readily available on the internet. In the spirit of Open Source, many of the tools are free and come with source code. Binary versions for many distros are available too. It's only a matter of choice and how we're going to set things up.

Antivirus
Clam AntiVirus is one of the best antiviruses around. So far it can detect almost 97% of Windows viruses and worms. These viruses reach our computers over the network and the internet, with email as the main medium. I personally get on average 5 to 10 emails containing a virus. That's why an antivirus is really important these days.

Trashscan
This is a script invoked by procmail to scan mail and send a notice to the sender if the mail contains a virus. trashscan comes with the clamav package.

Setting up procmailrc for scanning

#
# procmail configuration for TrashScan:
# ZapCoded by Trashware; 13.10.2002
#

# [ ... ]

# ----------------------------------------------------------------------------- #
# Virus scan section ... #
# ----------------------------------------------------------------------------- #

# 1. Run TrashScan
:0
* multipart
* !^X-Virus-Scan:
| /usr/local/sbin/trashscan


# 2. Filter tagged virus mails
:0:
* ^X-Virus-Scan: Suspicious
/dev/null

The last line will delete mail containing a virus.

Sunday, July 17, 2005

Setting up Spam Detection System for mail server

For over a month, I did research on how to install, configure and test a spam detection system on the mail server which I manage. Here are the quick steps:

1. Install SpamAssassin from spamassassin.org. I use spamc and spamd, not the pure-Perl version. Advantage: faster on a busy server.

2. Make these settings in /etc/mail/spamassassin/local.cf



# SpamAssassin user preferences file.
# See 'man Mail::SpamAssassin::Conf' for
# details of what can be tweaked.

# score needed to deem an email to be spam.
# the lower the score, the more likely the email
# will be classified as spam. default is 5, but
# I have found that 4 works a little bit better
required_hits 4

# if you find an email from an address classified
# as spam that should
# *never* be classified as spam, add it to the whitelist
whitelist_from *@mp3.com

# if you receive an email from an address that will
# always be spam, add it to the blacklist (comma separated)
blacklist_from big@boss.com

# Whitelist and blacklist addresses are now
# file-glob-style patterns, so
# "friend@somewhere.com", "*@isp.com",
# or "*.domain.net" will all work.

# append the subject line with "[SPAM]"
# if you do not want the subject line altered,
# just remove this line
subject_tag [SPAM]

3. Make sure procmail is installed; use 'which procmail' to check. If not, go to www.procmail.org to download and install it.

4. Test with one user first. Configure .procmailrc in that user's home directory like this:


## Set to yes when debugging

VERBOSE=no


## Put '#' before LOGFILE if you want
# no logging (not recommended)
LOGFILE=procmaillog



:0fw: spamassassin.lock
/usr/bin/spamc

# The following three lines move messages tagged
# as spam to a folder called "spam-folder" If you
# want mail to stay in your inbox, just
# delete the lines

:0:
* ^X-Spam-Status: Yes
spam-folder



Update :
You could replace spam-folder with /dev/null if you want the spam mails to be automatically deleted as below (Not Recommended):

:0:
* ^X-Spam-Status: Yes
/dev/null


Note: the last three lines are important for automatically moving spam mails to 'spam-folder'. Spam mails are marked [SPAM] in their subject, and this mark is user-definable.
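What those last three lines do can be simulated without a running spamd. A sketch with a hypothetical message; the grep pattern is the same header procmail matches on:

```shell
#!/bin/sh
# Simulate the procmail decision: any message whose headers carry
# "X-Spam-Status: Yes" (added by spamc) goes to spam-folder.
# The message below is a hypothetical sample.
cat > /tmp/msg.txt <<'EOF'
From: big@boss.com
Subject: [SPAM] buy now
X-Spam-Status: Yes, hits=7.2 required=4.0
EOF
if grep -q '^X-Spam-Status: Yes' /tmp/msg.txt; then
    echo "deliver to spam-folder"
else
    echo "deliver to inbox"
fi
# prints "deliver to spam-folder"
```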

OK. That's all there is to it. Test it for a month and watch the spam mails land in the spam-folder. :-) I bet they will.

Wednesday, July 6, 2005

Internet is ready

Yesterday I managed to move the servers from the old building to the new building. Here are the pics, taken just before I wrote this post.

Technician Room


Computers are ready to be serviced!

Sunday, July 3, 2005

No network huh?

Since two weeks ago, the new building has been waiting for me to put all my stuff there, but I just couldn't. Why? Because the internet is not there yet. How can I work without it? My work needs an internet connection. So what should I do? Not move! And it was worth it, because tomorrow's UITM registration needs an internet connection to register the new students online. If I had moved, I would have had to carry all the servers, routers and switches back to the old building. Actually, I was waiting for TMNet personnel to call me yesterday about the line at the new building, but no one called.

Now I'm at the office chatting and SMS'ing my friends: one is my ex-student and the other is my chatting friend. The network at the new building is still not complete. Labs 1 and 2 will be done in a day or two. I can't wait to get my hands dirty configuring and troubleshooting the new network. I noticed yesterday that the new switches were all 3Com.

Thursday, June 23, 2005

Busy day 2

Huh? I can't install the Oracle free edition in the lab? Yeah, it's true, because it would break the policy. Now I understand. Dr. Janet instructed me not to do it, but the students can install it at home. She wanted to discuss this issue further with her colleagues back in Australia on how to get Oracle free, or at a discounted price, for educational institutions. She said that Oracle Malaysia is nasty. Huh?

The day 2 meeting revolved around the various subjects to be taught and the software required. I was surprised to hear that for the Computer Graphics subject, the Java Swing package will be used throughout the course; students are expected to develop computer graphics applications with it. I have been learning it for quite some time and it is quite difficult. They will have to remember many Swing APIs, and the machine should have more than 256 MB of RAM to run it smoothly, as Java programs will chew up your memory in an instant.

Wednesday, June 22, 2005

A busy day for a lazy man

Today was packed with meetings from nine to five. I had a meeting with Flinders University regarding the courses ShahPutra wants to offer here. They have strict requirements regarding student intake and the software and technology that we have here.
"We are very Java" - said Dr. Janet, the IT Head
The course to be offered is the Bachelor of Computer Science and IT. As a system administrator, I should provide them with the software required to run the program, such as Java, C/C++, a web server (Apache), JDK 1.4, J2EE, a Linux OS, Oracle and also MySQL. Thank God I can download the free edition of Oracle from the Oracle website. That will save costs, as our boss likes. Students can run the Oracle server on their own PCs and do their work.

Last night I read the RAID manual for the 10th time. I realized that for RAID 5, I should make a RAID 1 array first, since booting can be done from it. I have three disks from which I could make a RAID 5 array. There is no automatic process for this, as many installers do not support configuring RAID 5 during first setup.

Slackware is still my Linux distro of choice, although many of my friends run some sort of modern distro like Xandros, Mandrake (now Mandriva) or Fedora. I just find it easy, since I first started using Linux with it. I learned many essential things about Linux by using Slackware, and my friends tell me that it is similar to, but not the same as, the BSD family of OSes.

Tuesday, June 14, 2005

Williams F1 gives green light to Linux

HP Linux supercomputer aids aerodynamics modelling for team cars
Peter Williams, vnunet.com 02 Oct 2003

The BMW Williams Formula 1 (F1) team has dramatically improved its high-resolution aerodynamic modelling of team cars by introducing an HP Linux supercomputer cluster.

The company, which currently lies second in this year's F1 Constructors' Championship, has added "several hundred" HP ProLiant Intel-based servers to its Oxfordshire headquarters.

Apart from driver skill, competitive advantage can be achieved through tyres, engine power, chassis and aerodynamics, with the latter the most difficult to control.

"Last year showed us that our chassis was a model of reliability but that there's still room for improvement, particularly on the aerodynamic front," said Patrick Head, technical director at Williams.

"[This] has given us the necessary technological leadership and expertise to design the revolutionary FW25 [this year's F1 car] rather than a modified version of last season's FW24."

A spokesman for Williams F1 added that the main driver behind the decision to expand the team's supercomputing resource was the need to reduce the time taken to perform a complete analysis for a given size of model.

The team then selected the technology that gave it the biggest reduction in total analysis time within its budget.

"We performed a number of benchmarking tests using typical models and selected the Linux cluster from the results of these tests," the spokesman told vnunet.com.

"The main benefit we have gained is indeed in time: we are now running a complete analysis on large models overnight, so that engineers can send a job to run in the evening and then have the results available in the following morning.

"This shortens our overall time to produce a design iteration, which means we can bring performance to our car more quickly."

Williams F1 is now studying the scalability of the Linux cluster and other solutions in order to understand how it can increase model sizes while continuing to get solutions to run overnight, "so that we can get more precision in our analysis without sacrificing any time", the spokesman said.

Tim Bush, engineering manager for HP EMEA, who was responsible for the installation on HP's side, added: "There was genuine surprise at the performance and its impact on the design cycle from the Williams F1 personnel."

Linux is popular for exploiting off-the-shelf applications that require heavy compute capability, while extra performance had come through using a very high-bandwidth, low-latency processor interconnection, explained Bush.

The result, according to Williams F1, has been a threefold enhancement of its simulation capabilities through more detailed computational fluid dynamics simulations.

This halved design, development and testing time has also provided more capacity to experiment with new car design concepts.

The multi-rack Intel-Xeon processor-based system, which is controlled through a head node HP Integrity (Itanium 64-bit) server, was delivered preconfigured in May.

HP is a major BMW Williams F1 team sponsor and technology partner, and the system is attached to a central HP storage area network which runs to many terabytes.

There are also two ruggedised race systems, one travelling with the team and the second with the test team, and crash and structural testing systems running on HP-UX (Unix).

Source : vnunet.com

Linux revs up Renault F1 testing


IBM e1350 Linux cluster cuts simulation time from three weeks to 18 hours
Robert Jaques, vnunet.com 15 Mar 2005

Linux has helped the Renault Formula 1 team to slash its engine and chassis computational analysis time by 90 per cent.

The team dramatically cut the time it spends performing computational analysis, and ultimately reduced development costs, by deploying an IBM e1350 cluster running Linux. The system is based on IBM eServer e325 and e326 servers with AMD Opteron processors.

In addition, Renault Formula 1 has deployed two IBM eServer pSeries 630s running AIX, and one TotalStorage DS 4300 storage server, to capture and store the computational data generated by the e1350 cluster.

Christophe Verdier, Renault F1 Team IT director, said: "The performance of this system has enabled Renault F1 to fully optimise its V10 engine capability. This has given the team a considerable advantage, since a race is as much won in the factory as it is on the track.

"With xSeries and Linux we have a high-performance, low-cost cluster that will help us to run ever more accurate simulations. With 12 IBM e325 servers working in parallel, a simulation operation which can take three weeks is now completed in 18 hours."

-- source : IT Week

Firefox Browser Adoption Rates Slow

Open-source Web browser Firefox is continuing its spread across Europe, although the pace of adoption has slowed somewhat, according to a recently released report. French Web metrics company XiTi noted that Firefox now accounts for about 14 percent of browsers in Europe, up from 13 percent in April and 11 percent in March. The numbers fall in line with U.S. usage, which is approximately 13 percent. Finland shows the largest Firefox use, with over 30 percent of Web users employing the browser. The country is followed by Germany, with over 24 percent, and Hungary, with about 22 percent. At the bottom of the adoption list were Luxembourg, with 10 percent, Lithuania, at 7 percent, and Monaco, with 6 percent.

Slow Down
Mirroring a trend seen in the U.S., Europe's Firefox adoption rate appears to be slowing. Analysts point to several possible factors, including recent security flaws involving cross-site scripting and remote system access vulnerabilities; the flaws were rated as "extremely critical" by security company Secunia. A more likely explanation for the slowdown, though, might simply be saturation: the browser's expected adopters have already downloaded Firefox, making new growth dependent on slower strategies like word of mouth and advertising. Despite the lower growth rates, the Mozilla Foundation is emphasizing the strength of Firefox in Europe as well as other parts of the world. Currently, the browser has been downloaded over 62 million times.

Steady Growth
Now that Firefox has gotten off to a blazing start, many analysts have predicted that it will grow steadily but more slowly in the future. In Europe, where Linux is being employed in several high-profile projects and government offices, Firefox should continue to have momentum for some time to come, said AMR Research analyst Paul Kirby.

"Europe has shown that it embraces open source," he said. "So it wouldn't be surprising to see Firefox be popular in many countries where open source has taken hold."

-- source : http://www.newsfactor.com/story.xhtml?story_id=36132

Monday, June 6, 2005

RAID - Revisited

I'm still researching RAID. I have done software RAID before and am now looking for hints on hardware RAID and its advantages over software-based RAID. Let's get a basic picture of RAID and beyond.

What does RAID stand for?
In 1987, Patterson, Gibson and Katz at the University of California Berkeley, published a paper entitled "A Case for Redundant Arrays of Inexpensive Disks (RAID)" . This paper described various types of disk arrays, referred to by the acronym RAID. The basic idea of RAID was to combine multiple small, inexpensive disk drives into an array of disk drives which yields performance exceeding that of a Single Large Expensive Drive (SLED). Additionally, this array of drives appears to the computer as a single logical storage unit or drive.

The Mean Time Between Failure (MTBF) of the array will be equal to the MTBF of an individual drive, divided by the number of drives in the array. Because of this, the MTBF of an array of drives would be too low for many application requirements. However, disk arrays can be made fault-tolerant by redundantly storing information in various ways. Five types of array architectures, RAID-1 through RAID-5, were defined by the Berkeley paper, each providing disk fault-tolerance and each offering different trade-offs in features and performance. In addition to these five redundant array architectures, it has become popular to refer to a non-redundant array of disk drives as a RAID-0 array.
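The MTBF rule above is simple division. A quick sketch, with a hypothetical per-drive figure:

```shell
#!/bin/sh
# Array MTBF (without redundancy) = drive MTBF / number of drives.
# 500000 hours is a hypothetical per-drive MTBF.
DRIVE_MTBF=500000
DRIVES=5
echo "array MTBF: $((DRIVE_MTBF / DRIVES)) hours"
# prints "array MTBF: 100000 hours"
```

Five drives cut the expected time to first failure by a factor of five, which is exactly why the redundant levels below exist.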

The different RAID levels

RAID-0
RAID Level 0 is not redundant, hence does not truly fit the "RAID" acronym. In level 0, data is split across drives, resulting in higher data throughput. Since no redundant information is stored, performance is very good, but the failure of any disk in the array results in data loss. This level is commonly referred to as striping.
RAID-1
RAID Level 1 provides redundancy by writing all data to two or more drives. The performance of a level 1 array tends to be faster on reads and slower on writes compared to a single drive, but if either drive fails, no data is lost. This is a good entry-level redundant system, since only two drives are required; however, since one drive is used to store a duplicate of the data, the cost per megabyte is high. This level is commonly referred to as mirroring.

RAID-2
RAID Level 2, which uses Hamming error correction codes, is intended for use with drives which do not have built-in error detection. All SCSI drives support built-in error detection, so this level is of little use when using SCSI drives.

RAID-3
RAID Level 3 stripes data at a byte level across several drives, with parity stored on one drive. It is otherwise similar to level 4. Byte-level striping requires hardware support for efficient use.

RAID-4
RAID Level 4 stripes data at a block level across several drives, with parity stored on one drive. The parity information allows recovery from the failure of any single drive. The performance of a level 4 array is very good for reads (the same as level 0). Writes, however, require that parity data be updated each time. This slows small random writes, in particular, though large writes or sequential writes are fairly fast. Because only one drive in the array stores redundant data, the cost per megabyte of a level 4 array can be fairly low.

RAID-5
RAID Level 5 is similar to level 4, but distributes parity among the drives. This can speed small writes in multiprocessing systems, since the parity disk does not become a bottleneck. Because parity data must be skipped on each drive during reads, however, the performance for reads tends to be considerably lower than a level 4 array. The cost per megabyte is the same as for level 4.


Summary:
RAID-0 is the fastest and most efficient array type but offers no fault-tolerance.
RAID-1 is the array of choice for performance-critical, fault-tolerant environments. In addition, RAID-1 is the only choice for fault-tolerance if no more than two drives are desired.
RAID-2 is seldom used today since ECC is embedded in almost all modern disk drives.
RAID-3 can be used in data intensive or single-user environments which access long sequential records to speed up data transfer. However, RAID-3 does not allow multiple I/O operations to be overlapped and requires synchronized-spindle drives in order to avoid performance degradation with short records.
RAID-4 offers no advantages over RAID-5 and does not support multiple simultaneous write operations.
RAID-5 is the best choice in multi-user environments which are not write performance sensitive. However, at least three, and more typically five drives are required for RAID-5 arrays.
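The trade-offs in the summary show up directly in usable capacity. A sketch with hypothetical drive counts and sizes:

```shell
#!/bin/sh
# Usable capacity for N identical drives of S GB each:
# RAID-0 stripes all of them, RAID-1 keeps one copy's worth,
# RAID-4/5 give up one drive's worth to parity.
N=3; S=80   # hypothetical: three 80 GB drives
echo "RAID-0: $((N * S)) GB"
echo "RAID-1: $S GB"
echo "RAID-4/5: $(( (N - 1) * S )) GB"
```

With three drives, RAID-5's parity overhead is one third; add more drives and the overhead fraction shrinks, which is why larger arrays favour RAID-5.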

Hardware RAID
The hardware-based system manages the RAID subsystem independently from the host and presents to the host only a single disk per RAID array. This way the host doesn't have to be aware of the RAID subsystem(s).

The controller-based hardware solution
DPT's SCSI controllers are a good example of a controller-based RAID solution. The intelligent controller manages the RAID subsystem independently from the host. The advantage over an external SCSI-to-SCSI RAID subsystem is that the controller is able to span the RAID subsystem over multiple SCSI channels, and by this removes the limiting factor external RAID solutions have: the transfer rate over the SCSI bus.

The external hardware solution (SCSI-to-SCSI RAID)
An external RAID box moves all the RAID-handling "intelligence" into a controller sitting in the external disk subsystem. The whole subsystem is connected to the host via a normal SCSI controller and appears to the host as a single disk or multiple disks. This solution has drawbacks compared to the controller-based solution: the single SCSI channel used creates a bottleneck. Newer technologies like Fibre Channel can ease this problem, especially if they allow trunking multiple channels into a Storage Area Network. Four SCSI drives can already completely flood a parallel SCSI bus, since the average transfer size is around 4KB and the command transfer overhead, which even in Ultra SCSI is still done asynchronously, takes most of the bus time.

Software RAID (aka poor man's redundancy)
The MD driver in the Linux kernel is an example of a RAID solution that is completely hardware independent. The Linux MD driver currently supports RAID levels 0/1/4/5 plus linear mode. Adaptec's AAA-RAID controllers are another example: they have no RAID functionality whatsoever on the controller and depend on external drivers to provide all RAID functionality. They are basically just multiple AHA2940 controllers integrated on one card; Linux detects them as AHA2940s and treats them accordingly. Every OS needs its own special driver for this type of RAID solution, which is error-prone and not very compatible.



Hardware vs. Software RAID
Just like any other application, software-based arrays occupy host system memory, consume CPU cycles and are operating system dependent. By contending with other applications that are running concurrently for host CPU cycles and memory, software-based arrays degrade overall server performance. Also, unlike hardware-based arrays, the performance of a software-based array is directly dependent on server CPU performance and load.

Except for the array functionality, hardware-based RAID schemes have very little in common with software-based implementations. Since the host CPU can execute user applications while the array adapter's processor simultaneously executes the array functions, the result is true hardware multi-tasking. Hardware arrays also do not occupy any host system memory, nor are they operating system dependent.

Hardware arrays are also highly fault tolerant. Since the array logic is based in hardware, software is NOT required to boot. Some software arrays, however, will fail to boot if the boot drive in the array fails. For example, an array implemented in software can only be functional when the array software has been read from the disks and is memory-resident. What happens if the server can't load the array software because the disk that contains the fault tolerant software has failed? Software-based implementations commonly require a separate boot drive, which is NOT included in the array.

Sunday, June 5, 2005

RSS

If you are too lazy to update your website, there's an alternative: an RSS feeder. With it, you don't need to update your website with the latest news from other sites by hand. If the sites provide RSS, an RSS feeder can feed your website with their news automatically. But before that, you should know how to set one up. There are many RSS feeders around, and I personally use MagpieRSS.

Add RSS feeds to your Web site using MagpieRSS
Syndication of material from other sites is a good way to get fresh content on your site. As visitors arrive at your site they can see a teaser for the syndicated content and a link to the publisher. If they are interested in an item they can follow the link to the original location. You can add syndicated content to your site using the Really Simple Syndication (RSS) protocol and a bit of PHP code in the form of an application called MagpieRSS. Here's how.

MagpieRSS is an RSS parser written in PHP. It supports RSS 0.9 and 1.0, with some RSS 2.0 support. MagpieRSS is a simple object-oriented backend which includes automatic caching of parsed RSS to reduce load on external Web sites. To use MagpieRSS on your Web site you will need PHP 4.2.0 (or greater) with XML (expat) support or PHP5 with libxml2 support. Download MagpieRSS. Extract the four main files (rss_fetch.inc, rss_parser.inc, rss_cache.inc, and rss_utils.inc) and the directory extlib and copy them to a directory named magpierss in the document root (where your web pages are stored) on your Web server.

Next, decide where the syndicated content will go on your site. If you want the content to appear on your front page then most likely you need to edit index.php. If you only have an index.html (not .php) then rename it to index.php as now you are adding some PHP code to it. Edit the file and add the following line at the top:


require_once('magpierss/rss_fetch.inc');


To actually fetch and parse the RSS, add a call to fetch_rss() to your web page, replacing the newsforge.com URL with the URL of the content you are syndicating. This function returns an object ($rss) that contains the syndicated content as an array of items, along with some publisher information, such as the name of the publisher (stored in $rss->channel['title']) and an optional description of the publisher (e.g. "The Online Newspaper for Linux and Open Source"), found in $rss->channel['description']. There is also $rss->channel['link'], which contains a general link to the publisher. The syndicated items can be accessed via $rss->items. A simple loop can be used to traverse the items one at a time:


foreach ($rss->items as $item)
{
// Your code here
}


Each item contains a title, link and description. $item['title'] contains the title of the article or story, $item['link'] is the link to the original and $item['description'] is the description of the story which is often the introduction paragraph or some sort of summary. To display a simple list of syndicated items use the following code:


<?php
require_once('magpierss/rss_fetch.inc');
// set $url to the feed you are syndicating
$rss = fetch_rss($url);
echo "<a href=".$rss->channel['link'].">".$rss->channel['title']."</a>";
foreach ($rss->items as $item) {
$href = $item['link'];
$title = $item['title'];
$desc = $item['description'];
echo "<a href=$href>$title</a>";
if ($desc)
echo $desc;
}
?>


Of course, the output is simple HTML, but from here you should be able to expand the code to suit the style and design of your site.

Lengths
In version 0.91 of the RSS specification the title is restricted to 100 characters and the description to 500, but there are no length limits in RSS 0.92 and greater. This can cause a problem for your Web site. Keeping a consistent look and feel is important, and if syndicated items are displayed too freely your site may start to look out of joint. Imagine you have code similar to the above listing news items in a sidebar: what will happen if the description of a particular item is several thousand characters long? Or what if there were 50 items being syndicated?

Length validation is an important aspect of putting syndicated items on your site. To limit the number of items displayed, the easiest thing to do is slice the resulting array into a smaller chunk. After calling fetch_rss(), but before any processing, add the following line: $items = array_slice($rss->items, 0, 10); which will shrink the array to contain only the first 10 items. Now the foreach loop can be run without any worries.

To validate the length of the description use the strlen() function:

if (strlen($desc) >= 80)
{
$desc = substr($desc,0,79)."...";
}


Here the description is shortened to be less than 80 characters, with "..." at the end to show that there is more text available. If you have shortened the description, you might want to offer a link to the full syndicated text rather than a link to the original; that way users remain on your site longer. The full version of the syndicated item would then contain the link to the original. To do this you will need a script called something like readmore.php, which uses MagpieRSS to display the full text as the main item on the web page, along with your normal navigation, side bars, and advertising. The parameters to readmore.php would be the URL of the RSS feed and the number of the item you wish to display.

One thing to watch is that some descriptions contain HTML as well as plain text. Blindly chopping the string could cut it half way through a tag. Smart chopping is needed, but that is beyond the scope of this article. A crude solution, however, is to use the PHP function strip_tags(), which removes all the HTML tags from the string. The full-text version of the item displayed by readmore.php can leave the tags intact.
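The crude strip_tags() route can be sketched outside PHP as well; here is a minimal shell illustration, using a made-up description string (and noting that a regex is of course not a real HTML parser):

```shell
# Remove HTML tags from a description, analogous to PHP's strip_tags().
# The sample description is hypothetical; sed's regex handles simple tags only.
desc='<p>Story: <b>MagpieRSS</b> makes syndication easy</p>'
clean=$(printf '%s' "$desc" | sed 's/<[^>]*>//g')
echo "$clean"
# prints: Story: MagpieRSS makes syndication easy
```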

Caching
To speed up your Web site and save unnecessary traffic to the publisher's Web server, it is important to set up caching. MagpieRSS has built-in automatic caching; with caching enabled, MagpieRSS will only fetch and parse RSS feeds when the cache is too old.

To enable caching add the following lines to your script:

define('MAGPIE_CACHE_DIR',
'/tmp/mysite_magpie_cache');
define('MAGPIE_CACHE_ON', 1);


By default MagpieRSS will cache items for one hour. This can be overridden using:

define('MAGPIE_CACHE_AGE', 1800);


Where 1800 is the number of seconds to cache objects (e.g. 30 minutes, 60*30).

Conclusion
MagpieRSS makes it easy to add syndicated content to your Web site; with only a small time investment you can improve the content of your site. Remember, "content is king."

Thanks to Gary Sims for this article.
Source : http://programming.linux.com/article.pl?sid=05/05/24/1337210

Friday, June 3, 2005

Does your software suit your business, or are you trying to fit your business to your software?

When I was reading Linux Format magazine yesterday, I came across this statement in an advertisement. This mag is hard to find in Kuantan (only at a bookshop in MegaMall, where I think I'm the only one who buys it :) ). Other mags I have on my bookshelf are Linux Users And Developers and Linux Magazine, which my friend sent me from London. Thanks, Mark.

It's good to read Linux mags every night before going to bed. You will fall asleep when you start reading it. :-)

Monday, May 30, 2005

What does it take to be a system administrator?

I found this interesting article, "What it takes to be a system admin?". I'm interested to know what exactly a system admin's work involves; given that I have no formal training in this field, I think I should make an effort to become one. As they say, "experience is the best teacher", and no one should disagree with that. This is why we need a lot of practice. During practice we will make mistakes, and from those mistakes you'll learn what's best, what's good, what's bad, and what's the worst thing to do.

Let's take a look at the key points of being a system administrator:

  1. Change your mindset -- The true SA is a combination caretaker, security guard, and baby sitter
  2. Learn new toolsets -- If you want to move into the corporate arena, you must be able to take whatever tools are lying around and get the job done. This means the ability to learn new tools and to use old tools in new ways.
  3. Learn to handle pressure -- Expect to have to work with others looking over your shoulder. It adds a new level of pressure to have a Senior VP of a billion dollar company watching you type! (Or just the guy who signs your paycheck.)
  4. Never start from scratch. Find something close and modify -- For scripting, start with the boot up scripts (/sbin/init.d, /etc/rc.d, etc)
  5. Hang out with experts -- Don't be afraid of appearing ignorant. Fear staying ignorant
  6. Practice good debugging habits -- Understand it the way it is (broken) before you try to fix it
  7. Learn manually, then codify -- Remember the commands by writing a script for them and commenting the script.
  8. Document what you do -- Comment your scripts liberally. The best comments (IMHO) are the ones that explain 'Why?'.
  9. Learn to share -- Share what you've learned with others (that's why I'm doing this page)
  10. Remember to have fun -- Make Unix your passion, not just your job. Don't consider becoming a SA if it isn't your passion.
Source : Jim Wildman's Moving To The Big Time

Thursday, May 26, 2005

What is ethernet bonding?

In the new building there will be three broadband lines to the Internet. I am thinking of setting up 'bonding', or port trunking (the Cisco term), in order to provide a load-balanced and fault-tolerant connection. I did this for dialup lines a few years ago, but this one is for Ethernet. Every modern kernel supports this feature. Now let's take a look at what bonding is all about...

What is bonding?
Bonding is the same as port trunking. In the following I will use the word bonding, because practically we will bond interfaces into one.

#!/bin/bash

modprobe bonding mode=0 miimon=100 # load the bonding module

ifconfig eth0 down # bring down the eth0 interface
ifconfig eth1 down # bring down the eth1 interface

ifconfig bond0 hw ether 00:11:22:33:44:55 # change the MAC address of the bond0 interface
ifconfig bond0 192.168.55.55 up # bond0 must have an IP before the ethX interfaces can be enslaved

ifenslave bond0 eth0 # put the eth0 interface into slave mode for bond0
ifenslave bond0 eth1 # put the eth1 interface into slave mode for bond0

You can set up your bond interface according to your needs. By changing one parameter (mode=X) you can have the following bonding types:

mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
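The XOR formula can be worked through by hand; here is a small shell sketch, using only the last octet of each MAC for simplicity, with hypothetical addresses:

```shell
# balance-xor slave selection: (source MAC XOR destination MAC) mod slave count.
# Only the last octet of each (hypothetical) MAC is used here for simplicity.
src_octet=85    # 0x55, last octet of source MAC 00:11:22:33:44:55
dst_octet=31    # 0x1f, last octet of a hypothetical destination MAC
slave_count=2
slave=$(( (src_octet ^ dst_octet) % slave_count ))
echo "this peer always maps to slave eth$slave"
# prints: this peer always maps to slave eth0
```

Because the result depends only on the two MAC addresses, a given peer always lands on the same slave, which is why this mode balances load across peers rather than across individual packets.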

mode=3 (broadcast)
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

Pre-requisites:
1. Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
2. A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

mode=5 (balance-tlb)

Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Prerequisite:
Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb)

Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.

The most used are the first four mode types...

You can also use multiple bond interfaces, but for that you must load the bonding module once for each interface you need. Presuming that you want two bond interfaces, you must configure /etc/modules.conf as follows:

alias bond0 bonding
options bond0 -o bond0 mode=0 miimon=100
alias bond1 bonding
options bond1 -o bond1 mode=1 miimon=100

Notes:


To restore your slaves' MAC addresses, you need to detach them from the bond (`ifenslave -d bond0 eth0`). The bonding driver will then restore the MAC addresses that the slaves had before they were enslaved. The bond's MAC address will be taken from its first slave device.

Promiscuous mode: depending on your bond type, when you put the bond interface into promiscuous mode the setting is propagated to the slave devices as follows:

For mode=0, 2, 3, and 4 the promiscuous mode setting is propagated to all slaves.
For mode=1, 5, and 6 the promiscuous mode setting is propagated only to the active slave.
For balance-tlb mode the active slave is the slave currently receiving inbound traffic; for balance-alb mode the active slave is the slave used as the "primary". For the active-backup, balance-tlb, and balance-alb modes, when the active slave changes (e.g., due to a link failure), the promiscuous setting will be propagated to the new active slave.
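On a live system the current active slave can be read from /proc/net/bonding/bond0. Here is a minimal parsing sketch, run against a hypothetical sample of that file's format rather than the real /proc entry:

```shell
# Extract the active slave from bonding status text (format as produced by
# the Linux bonding driver in /proc/net/bonding/bond0; sample is hypothetical).
status='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth1
MII Status: up'
active=$(printf '%s\n' "$status" | sed -n 's/^Currently Active Slave: //p')
echo "active slave: $active"
# prints: active slave: eth1
```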

Source : http://www.linuxhorizon.ro/bonding.html

Tuesday, May 24, 2005

The modem is broken

Today I found out that the modem at my workplace is broken. I have to contact TMNet to replace it because it's still under warranty. Thank God the replacement is a snap. Tomorrow, staff can connect to the Internet.

I'm a system admin.

Me, MySelf And I

After being a system admin for 4 months, I think this is my dream job, though the pay is still not enough for me. I've learnt so many things about Linux and how to maintain Linux servers (I have 3 servers to monitor). Many things are on my mind, ranging from backup (the hardest part is when you want to transfer accounts from one server to another), TCP/IP, network design, firewalls (this is fun!), routing configuration, and so on.
