Linux Tech Support<br /><br /><strong>Setup an online repository for RHEL</strong> (2011-02-04, Praveen)<br /><br />rpm -ivh http://packages.sw.be/rpmforge-release/rpmforge-release-0.3.6-1.el4.rf.i386.rpm<br /><br />yum install Package_name<br /><br />Sample rpmforge.repo file:<br />===========================<br /># Name: RPMforge RPM Repository for Red Hat Enterprise 5 - dag<br /># URL: http://rpmforge.net/<br />[rpmforge]<br />name = Red Hat Enterprise $releasever - RPMforge.net - dag<br />#baseurl = http://apt.sw.be/redhat/el5/en/$basearch/dag<br />mirrorlist = http://apt.sw.be/redhat/el5/en/mirrors-rpmforge<br />#mirrorlist = file:///etc/yum.repos.d/mirrors-rpmforge<br />enabled = 1<br />protect = 0<br />gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmforge-dag<br />gpgcheck = 1<br /><br /><strong>Introduction to iostat, vmstat and netstat</strong> (2008-10-24)<br /><br /><a name="Input Output statistics ( iostat )"><strong>Input/Output statistics (iostat)</strong></a><br />iostat reports terminal and disk I/O activity and CPU utilization. The first line of output covers the time period since boot; each subsequent line covers the prior interval. The kernel maintains a number of counters to keep track of these values.<br />iostat's activity class options default to tdc (terminal, disk, and CPU). If any other options are specified, this default is completely overridden; e.g.,
iostat -d will report only statistics about the disks.<br /><br /><a name="iostatsyntax:">syntax:</a><br />Basic syntax is iostat [options] interval count<br />options - specify the device class for which information is needed: disk, CPU or terminal (-d, -c, -t, or -tdc). The -x option gives extended statistics.<br />interval - the time period in seconds between two samples. iostat 4 will give data at 4-second intervals.<br />count - the number of times the data is reported. iostat 4 5 will give data at 4-second intervals, 5 times.<br /><br /><a name="iostatExample">Example</a><br />$ iostat -xtc 5 2<br />extended disk statistics tty cpu<br />disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b tin tout us sy wt id<br />sd0 2.6 3.0 20.7 22.7 0.1 0.2 59.2 6 19 0 84 3 85 11 0<br />sd1 4.2 1.0 33.5 8.0 0.0 0.2 47.2 2 23<br />sd2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0<br />sd3 10.2 1.6 51.4 12.8 0.1 0.3 31.2 3 31<br /><br />The fields have the following meanings:<br />disk - name of the disk<br />r/s - reads per second<br />w/s - writes per second<br />Kr/s - kilobytes read per second<br />Kw/s - kilobytes written per second<br />wait - average number of transactions waiting for service (queue length)<br />actv - average number of transactions actively being serviced (removed from the queue but not yet completed)<br />svc_t - average service time, in milliseconds<br />%w - percent of time there are transactions waiting for service (queue non-empty)<br />%b - percent of time the disk is busy (transactions in progress)<br /><br /><a name="Results and Solutions:iostat">Results and Solutions:</a><br />The values to watch in the iostat output are:<br />Reads/writes per second (r/s, w/s)<br />Percentage busy (%b)<br />Service time (svc_t)<br />If a disk shows consistently high reads/writes, the percentage busy (%b) is greater than 5 percent, and the average service time (svc_t) is greater than 30 milliseconds, then one of the following actions should be taken:<br />1.) Tune the application to
use disk I/O more efficiently by modifying the disk queries and using the available cache facilities of the application servers.<br />2.) Spread the file system of the disk onto two or more disks, using the disk-striping feature of a volume manager / DiskSuite etc.<br />3.) Increase the value of the inode cache parameter, ufs_ninode, which is the number of inodes to be held in memory. Inodes are cached globally (for UFS), not on a per-file-system basis.<br />4.) Move the file system to another, faster disk/controller, or replace the existing disk/controller with a faster one.<br /><br /><br /><br /><a name="Virtual Memory Statistics ( vmstat )"><strong>Virtual Memory Statistics (vmstat)</strong></a><br />vmstat reports virtual memory statistics of process, virtual memory, disk, trap, and CPU activity. On multi-CPU systems, vmstat averages the number of CPUs into the output. Without options, vmstat displays a one-line summary of the virtual memory activity since the system was booted.<br /><a name="syntax:vmstat">syntax:</a><br />Basic syntax is vmstat [options] interval count<br />options - specify the type of information needed, such as paging (-p), cache (-c), interrupts (-i) etc.<br />If no option is specified, information about processes, memory, paging, disk, interrupts and CPU is displayed.<br />interval - the time period in seconds between two samples. vmstat 4 will give data at 4-second intervals.<br />count - the number of times the data is reported. vmstat 4 5 will give data at 4-second intervals, 5 times.
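Both iostat and vmstat follow the same interval/count convention described above. A minimal Python sketch of that sampling loop (the `collect` callback is a hypothetical stand-in for whatever statistic you actually read):

```python
import time

def sample(collect, interval, count):
    """Call collect() `count` times, `interval` seconds apart,
    mimicking the `vmstat interval count` convention."""
    results = []
    for i in range(count):
        results.append(collect())
        if i < count - 1:          # no sleep after the last sample
            time.sleep(interval)
    return results

# Example: the equivalent of `vmstat 4 5` would be five samples,
# four seconds apart:
# samples = sample(read_vmstat_line, 4, 5)
```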
<a name="Example:vmstat">Example</a><br />The following command displays a summary of what the system is doing every five seconds.<br /><br /><br />example% vmstat 5<br />procs memory page disk faults cpu<br />r b w swap free re mf pi po fr de sr s0 s1 s2 s3 in sy cs us sy id<br />0 0 0 11456 4120 1 41 19 1 3 0 2 0 4 0 0 48 112 130 4 14 82<br />0 0 1 10132 4280 0 4 44 0 0 0 0 0 23 0 0 211 230 144 3 35 62<br />0 0 1 10132 4616 0 0 20 0 0 0 0 0 19 0 0 150 172 146 3 33 64<br />0 0 1 10132 5292 0 0 9 0 0 0 0 0 21 0 0 165 105 130 1 21 78<br /><br /><br />The fields of vmstat's display are:<br />procs<br />r - processes in the run queue<br />b - processes blocked for resources (I/O, paging etc.)<br />w - processes swapped out<br />memory (in Kbytes)<br />swap - amount of swap space currently available<br />free - size of the free list<br />page (in units per second)<br />re - page reclaims (see the -S option for how this field is modified)<br />mf - minor faults (see the -S option for how this field is modified)<br />pi - kilobytes paged in<br />po - kilobytes paged out<br />fr - kilobytes freed<br />de - anticipated short-term memory shortfall (Kbytes)<br />sr - pages scanned by the clock algorithm<br />disk (operations per second)<br />There are slots for up to four disks, labeled with a single letter and number. The letter indicates the type of disk (s = SCSI, i = IPI, etc.); the number is the logical unit number.<br />faults<br />in - (non-clock) device interrupts<br />sy - system calls<br />cs - CPU context switches<br />cpu - breakdown of percentage usage of CPU time (on multiprocessors this is an average across all processors)<br />us - user time<br />sy - system time<br />id - idle time<br /><br /><a name="Results and Solutions:vmstat">Results and Solutions:</a><br />A. CPU issues:<br />The following columns have to be watched to determine whether there is a CPU issue:<br />Processes in the run queue (procs r)<br />User time (cpu us)<br />System time (cpu sy)<br />Idle time (cpu id)<br />procs cpu<br />r b w us sy id<br />0 0 0 4 14 82<br />0 0 1 3 35 62<br />0 0 1 3 33 64<br />0 0 1 1 21 78<br />Problem symptoms:<br />1.)
If the number of processes in the run queue (procs r) is consistently greater than the number of CPUs on the system, the system will slow down, as there are more processes than available CPUs.<br />2.) If this number is more than four times the number of available CPUs, the system is facing a shortage of CPU power and the processes on the system will be greatly slowed down.<br />3.) If the idle time (cpu id) is consistently 0, and the system time (cpu sy) is double the user time (cpu us), the system is facing a shortage of CPU resources.<br /><br />Resolution :<br />Resolving these kinds of issues involves tuning the application procedures to make efficient use of the CPUs and, as a last resort, increasing CPU power or adding more CPUs to the system.<br /><br /><br />B. Memory Issues:<br />Memory bottlenecks are determined by the scan rate (sr). The scan rate is the number of pages scanned by the clock algorithm per second. If the scan rate (sr) is continuously over 200 pages per second, there is a memory shortage.<br /><br />Resolution :<br />1. Tune the applications & servers to make efficient use of memory and cache.<br />2. Increase system memory.<br />3. Implement priority paging on pre-Solaris 8 versions by adding the line "set priority_paging=1" to /etc/system. Remove this line if upgrading from Solaris 7 to 8 and retaining the old /etc/system file.<br /><br /><br /><br /><a name="netstat"><strong>Network Statistics (netstat)</strong></a><br />netstat displays the contents of various network-related data structures, depending on the options selected.<br /><a name="Syntax :netstat">Syntax :</a><br />netstat [options]<br />Multiple options can be given at one time.<br />Options<br />-a - displays the state of all sockets<br />-r - shows the system routing tables<br />-i - gives statistics on a per-interface basis<br />-m - displays information from the network memory buffers.
On Solaris, this shows statistics for STREAMS<br />-p [proto] - retrieves statistics for the specified protocol<br />-s - shows per-protocol statistics (some implementations allow -ss to remove fields with a value of 0 (zero) from the display)<br />-D - displays the status of DHCP-configured interfaces<br />-n - do not look up hostnames; display only IP addresses<br />-d (with -i) - displays dropped packets per interface<br />-I [interface] - retrieves information about only the specified interface<br />-v - be verbose<br />interval - number for continuous display of statistics<br /><br /><a name="Examples 1:netstat">Example :</a><br />$ netstat -rn<br />Routing Table: IPv4<br />Destination Gateway Flags Ref Use Interface<br />-------------------- -------------------- ----- ----- ------ ---------<br />192.168.1.0 192.168.1.11 U 1 1444 le0<br />224.0.0.0 192.168.1.11 U 1 0 le0<br />default 192.168.1.1 UG 1 68276<br />127.0.0.1 127.0.0.1 UH 1 10497 lo0<br />This shows the output on a Solaris machine whose IP address is 192.168.1.11, with a default router at 192.168.1.1.<br /><br /><a name="Results and Solutions:netstat">Results and Solutions:</a><br />A.) Network availability<br />The command above is mostly useful in troubleshooting network accessibility issues. When the outside network is not accessible from a machine, check the following:<br />1. whether the default router IP address is correct;<br />2. whether you can ping it from your machine.<br />3. If the router address is incorrect, it can be changed with the route add command. See <a style="COLOR: blue; TEXT-DECORATION: none" href="http://sunsite.eunnet.net:8888/ab2/coll.40.6/REFMAN1M/@Ab2PageView/idmatch(route-1m)">man route </a>for more info.<br /><br />route command examples:<br />$ route add default [hostname]<br />$ route add 192.0.2.32 [gateway_name]<br />If the router address is correct but you still can't ping it, there may be a problem with a network cable/hub/switch, and you have to try to eliminate the faulty component.<br /><br />B.)
Network Response<br />This option is used to diagnose network problems when connectivity is there but the response is slow.<br />Values to look at:<br />Collisions (Collis)<br />Output packets (Opkts)<br />Input errors (Ierrs)<br />Input packets (Ipkts)<br /><br />The above values give the information needed to work out:<br />i. The network collision rate, as follows:<br />Network collision rate = Output collision counts / Output packets<br />A network-wide collision rate greater than 10 percent indicates:<br />an overloaded network,<br />a poorly configured network, or<br />hardware problems.<br />ii. The input packet error rate, as follows:<br />Input Packet Error Rate = Ierrs / Ipkts<br />If the input error rate is high (over 0.25 percent), the host is dropping packets, and the hubs/switches/cables etc. need to be checked for potential problems.<br /><br />C. Network socket & TCP connection state<br />netstat gives important information about network socket and TCP state. This is very useful in finding out the open, closed and waiting network TCP connections. The network states returned by netstat are the following:<br /><br />CLOSED ---- Closed. The socket is not being used.<br />LISTEN ---- Listening for incoming connections.<br />SYN_SENT ---- Actively trying to establish a connection.<br />SYN_RECEIVED ---- Initial synchronization of the connection under way.<br />ESTABLISHED ---- Connection has been established.<br />CLOSE_WAIT ---- Remote shut down; waiting for the socket to close.<br />FIN_WAIT_1 ---- Socket closed; shutting down connection.<br />CLOSING ---- Closed, then remote shutdown; awaiting acknowledgement.<br />LAST_ACK ---- Remote shut down, then closed; awaiting acknowledgement.<br />FIN_WAIT_2 ---- Socket closed; waiting for shutdown from remote.<br />TIME_WAIT ---- Wait after close for remote shutdown retransmission.<br /><br />Example:<br /># netstat -a<br />If you see a lot of connections in the FIN_WAIT state, the TCP/IP parameters have to be tuned, because the connections are not being closed and they keep accumulating.
After some time, the system may run out of resources. The TCP parameters can be tuned to define a timeout, so that connections are released and can be used by new connections.<br /><br /><strong>What exactly is a load average?</strong> (2008-10-23)<br /><br />If you’ve spent some time on a Unix or Unix-like machine (e.g., Linux, OS X, Solaris, etc.) then you’re probably at least vaguely familiar with the concept of a load average. A system’s load average can be easily determined from the Unix shell by running the uptime command:<br />mmalone@www:~$ uptime<br />15:37:38 up 133 days, 3:37, 3 users,<br />load average: 0.37, 0.37, 0.41<br />The load average is also displayed by the w and top commands, and by pretty much every system monitoring package on the planet. But what the heck is a load average, exactly?<br />To most people, a load average is some mysterious number that is somehow related to the amount of work that their computer is currently handling. But what is a good load average, and how high is too high? The answer is actually quite simple. But first you have to understand what the load average is actually measuring.<br />Without getting into the vagaries of every Unix-like operating system in existence, the load average more or less represents the average number of processes that are in the running (using the CPU) or runnable (waiting for the CPU) states. One notable exception exists: Linux includes processes in uninterruptible sleep states, typically waiting for some I/O activity to complete.
This can markedly increase the load average on Linux systems.<br />The load average is calculated as an <a href="http://en.wikipedia.org/wiki/Moving_average_%28technical_analysis%29#Exponential_moving_average">exponential moving average</a> of the load number (the number of processes that are running or runnable). The three numbers returned as the system’s load average represent the one-, five-, and fifteen-minute moving load averages of the system.<br />So, for a single-processor machine, a load average of 1 means that, on average, there is always a process in the running or runnable state. Thus, the CPU is being utilized 100% of the time and is at capacity. If you tried to run another process, it would have to wait in the run queue before being executed. For multiprocessor systems, however, the system isn’t CPU bound until the load average equals the number of processors (or cores, for multi-core processors) in the machine. My database server, for example, has two dual-core processors. Thus, the system isn’t fully utilized until the load average reaches 4.<br />In summary, the load average is a moving average of the number of processes in the running or runnable states. You shouldn’t be worried about your system’s load unless it is consistently higher than the number of processors (or cores) in your machine. In general, you can calculate a system’s CPU utilization by dividing the load average by the number of processors/cores in the system.<br /><br />================================================================<br /><br />1 UNIX Commands<br />Actually, load average is not a UNIX command in the conventional sense. Rather it's an embedded metric that appears in the output of other UNIX commands like uptime and procinfo. These commands are commonly used by UNIX sysadmins to observe system resource consumption.
Let's look at some of them in more detail.<br /><a name="tth_sEc1.1">1.1</a> Classic Output<br />The generic ASCII textual format appears in a variety of UNIX shell commands. Here are some common examples.<br /><br />uptime<br />The uptime shell command produces the following output:<br />[pax:~]% uptime<br />9:40am up 9 days, 10:36, 4 users, load average: 0.02, 0.01, 0.00<br />It shows the time since the system was last booted, the number of active user processes and something called the load average.<br /><br />procinfo<br />On Linux systems, the procinfo command produces the following output:<br />[pax:~]% procinfo<br />Linux 2.0.36 (root@pax) (gcc 2.7.2.3) #1 Wed Jul 25 21:40:16 EST 2001 [pax]<br />Memory: Total Used Free Shared Buffers Cached<br />Mem: 95564 90252 5312 31412 33104 26412<br />Swap: 68508 0 68508<br />Bootup: Sun Jul 21 15:21:15 2002 Load average: 0.15 0.03 0.01 2/58 8557<br />...<br />The load average appears in the lower left corner of this output.<br /><br />w<br />The w(ho) command produces the following output:<br />[pax:~]% w<br />9:40am up 9 days, 10:35, 4 users, load average: 0.02, 0.01, 0.00<br />USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT<br />mir ttyp0 :0.0 Fri10pm 3days 0.09s 0.09s bash<br />neil ttyp2 12-35-86-1.ea.co 9:40am 0.00s 0.29s 0.15s w<br />...<br />Notice that the first line of the output is identical to the output of the uptime command.<br /><br />top<br />The top command is a more recent addition to the UNIX command set that ranks processes according to the amount of CPU time they consume.
It produces the following output:<br />4:09am up 12:48, 1 user, load average: 0.02, 0.27, 0.17<br />58 processes: 57 sleeping, 1 running, 0 zombie, 0 stopped<br />CPU states: 0.5% user, 0.9% system, 0.0% nice, 98.5% idle<br />Mem: 95564K av, 78704K used, 16860K free, 32836K shrd, 40132K buff<br />Swap: 68508K av, 0K used, 68508K free 14508K cached<br />PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND<br />5909 neil 13 0 720 720 552 R 0 1.5 0.7 0:01 top<br />1 root 0 0 396 396 328 S 0 0.0 0.4 0:02 init<br />2 root 0 0 0 0 0 SW 0 0.0 0.0 0:00 kflushd<br />3 root -12 -12 0 0 0 SW< 0 0.0 0.0 0:00 kswapd<br />...<br />In each of these commands, note that there are three numbers reported as part of the load average output. Quite commonly, these numbers show a descending order from left to right. Occasionally, however, an ascending order appears, e.g., like that shown in the top output above.<br /><a name="tth_sEc1.2">1.2</a> GUI Output<br />The load average can also be displayed as a time series like that shown here in some output from a tool called <a href="http://www.orcaware.com/orca/docs/orcallator.html#processes_in_run_queue_system_load" target="_new">ORCA</a>. <a name="tth_fIg1"></a><br /><a name="fig:ladaily"></a><br />Figure 1: ORCA plot of the 3 daily load averages. <br />Although such visual aids help us to see that the green curve is more spiky and has more variability than the red curve, and they allow us to see a complete day's worth of data, it's not clear how useful this is for capacity planning or performance analysis. We need to understand more about how the load average metric is defined and calculated.<br /><a name="tth_sEc2">2</a> So What Is It?<br />So, exactly what is this thing called load average that is reported by all these various commands? Let's look at the official UNIX documentation.<br /><a name="tth_sEc2.1">2.1</a> The man Page<br />[pax:~]% man "load average"<br />No manual entry for load average<br />Oops! There is no man page!
The load average metric is an output embedded in other commands, so it doesn't get its own man entry. Alright, let's look at the man page for uptime, for example, and see if we can learn more that way.<br />...<br />DESCRIPTION<br />uptime gives a one line display of the following informa-<br />tion. The current time, how long the system has been run-<br />ning, how many users are currently logged on, and the sys-<br />tem load averages for the past 1, 5, and 15 minutes.<br />...<br />So, that explains the three metrics. They are the "... load averages for the past 1, 5, and 15 minutes", which are the GREEN, BLUE and RED curves, respectively, in Figure <a href="http://www.teamquest.com/resources/gunther/display/5/#fig:ladaily">1</a> above. Unfortunately, that still begs the question "What is the load?"<br /><a name="tth_sEc2.2">2.2</a> What the Gurus Have to Say<br />Let's turn to some UNIX hot-shots for more enlightenment.<br /><br />Tim O'Reilly and Crew<br />The book UNIX Power Tools [<a href="http://www.teamquest.com/resources/gunther/display/5/#powertools" name="CITEpowertools">POL97</a>] tells us on p.726, The CPU:<br />The load average tries to measure the number of active processes at any time. As a measure of CPU utilization, the load average is simplistic, poorly defined, but far from useless.<br />That's encouraging! Anyway, it does help to explain what is being measured: the number of active processes. On p.720, 39.07 Checking System Load: uptime, it continues ...<br />... High load averages usually mean that the system is being used heavily and the response time is correspondingly slow.<br />What's high? ... Ideally, you'd like a load average under, say, 3, ... Ultimately, 'high' means high enough so that you don't need uptime to tell you that the system is overloaded.<br />Hmmm ... where did that number "3" come from?
And which of the three averages (1, 5, 15 minutes) are they referring to?<br /><br />Adrian Cockcroft on Solaris<br />In Sun Performance and Tuning [<a href="http://www.teamquest.com/resources/gunther/display/5/#cock" name="CITEcock">Coc95</a>], in the section on p.97 entitled Understanding and Using the Load Average, Adrian Cockcroft states:<br />The load average is the sum of the run queue length and the number of jobs currently running on the CPUs. In Solaris 2.0 and 2.2 the load average did not include the running jobs but this bug was fixed in Solaris 2.3.<br />So, even the "big boys" at Sun can get it wrong. Nonetheless, the idea that the load average is associated with the CPU run queue is an important point.<br />O'Reilly et al. also note some potential gotchas with using the load average ...<br />... different systems will behave differently under the same load average. ... running a single cpu-bound background job .... can bring response to a crawl even though the load avg remains quite low.<br />As I will demonstrate, this depends on when you look. If the CPU-bound process runs long enough, it will drive the load average up because it's always either running or runnable. The obscurities stem from the fact that the load average is not your average kind of average. As we alluded to in the introduction above, it's a time-dependent average. Not only that, but it's a damped time-dependent average. To find out more, let's do some controlled experiments.<br /><a name="tth_sEc3">3</a> Performance Experiments<br />The experiments described in this section involved running some workloads in the background on a single-CPU Linux box. The test had a duration of 1 hour and there were two phases:<br />The CPU was pegged for 2,100 seconds and then the processes were killed.<br />The CPU was quiescent for the remaining 1,500 seconds.<br />A Perl script sampled the load average every 5 minutes using the uptime command.
Here are the details.<br /><a name="tth_sEc3.1">3.1</a> Test Load<br />Two hot loops were fired up as background tasks on a single-CPU Linux box. There were two phases in the test:<br />The CPU is pegged by these tasks for 2,100 seconds.<br />The CPU is (relatively) quiescent for the remaining 1,500 seconds.<br />The 1-minute average reaches a value of 2 around 300 seconds into the test. The 5-minute average reaches 2 around 1,200 seconds into the test, and the 15-minute average would reach 2 at around 3,600 seconds, but the processes are killed after 35 minutes (i.e., 2,100 seconds).<br /><a name="tth_sEc3.2">3.2</a> Process Sampling<br />As the authors [<a href="http://www.teamquest.com/resources/gunther/display/5/#linuxk" name="CITElinuxk">BC01</a>] explain about the Linux kernel, because both of our test processes are CPU-bound they will be in a TASK_RUNNING state. This means they are either:<br />running, i.e., currently executing on the CPU<br />runnable, i.e., waiting in the run_queue for the CPU<br />The Linux kernel also checks to see if there are any tasks in a short-term sleep state called TASK_UNINTERRUPTIBLE. If there are, they are also included in the load average sample.
There were none in our test load.<br />The following <a href="http://lxr.linux.no/linux/kernel/timer.c#L599" target="_new">source fragment</a> reveals more details about how this is done.<br />599 /*<br />600 * Nr of active tasks - counted in fixed-point numbers<br />601 */<br />602 static unsigned long count_active_tasks(void)<br />603 {<br />604 struct task_struct *p;<br />605 unsigned long nr = 0;<br />606<br />607 read_lock(&tasklist_lock);<br />608 for_each_task(p) {<br />609 if ((p->state == TASK_RUNNING ||<br />610 (p->state & TASK_UNINTERRUPTIBLE)))<br />611 nr += FIXED_1;<br />612 }<br />613 read_unlock(&tasklist_lock);<br />614 return nr;<br />615 }<br />So, the active-task count is sampled every 5 seconds, which is the Linux kernel's intrinsic timebase for updating the load average calculations.<br /><a name="tth_sEc3.3">3.3</a> Test Results<br />The results of these experiments are plotted in Fig. <a href="http://www.teamquest.com/resources/gunther/display/5/#fig:LAFull">2</a>. NOTE: These colors do not correspond to those used in the ORCA plots like Figure <a href="http://www.teamquest.com/resources/gunther/display/5/#fig:ladaily">1</a>.<br />Although the workload starts up instantaneously and is abruptly stopped later at 2,100 seconds, the load average values have to catch up with the instantaneous state. The 1-minute samples track the most quickly, while the 15-minute samples lag the furthest. <a name="tth_fIg2"></a><a name="fig:LAFull"></a><br />Figure 2: Linux load average test results. <br />For comparison, here's how it looks for a single hot loop running on a single-CPU Solaris system.<a name="tth_fIg3"></a><a name="fig:LAFSol"></a><br />Figure 3: Solaris load average test results. <br />You would be forgiven for jumping to the conclusion that the "load" is the same thing as the CPU utilization. As the Linux results show, when two hot processes are running, the maximum load is two (not one) on a single CPU.
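The catch-up behaviour just described can be reproduced numerically. This is a sketch, not the kernel code: it applies the 1-minute exponentially-damped update (the CALC_LOAD rule examined in the next section) every 5 seconds, with two hot processes for the first 2,100 seconds and none for the remaining 1,500:

```python
import math

DECAY_1MIN = math.exp(-5.0 / 60.0)   # per-5-second decay factor

def step(load, n):
    """One 5-second update of the 1-minute load average."""
    return load * DECAY_1MIN + n * (1.0 - DECAY_1MIN)

load, history = 0.0, {}
for t in range(5, 3601, 5):
    n = 2 if t <= 2100 else 0        # two hot loops, killed at 2,100 s
    load = step(load, n)
    history[t] = load

# The 1-minute average is within a couple of percent of 2 by ~300 s
# into the test, and decays back toward zero once the loops are killed.
```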
So, load is not equivalent to CPU utilization.<br />From another perspective, Fig. <a href="http://www.teamquest.com/resources/gunther/display/5/#fig:LAFull">2</a> resembles the charging and discharging of a capacitive RC circuit.<a name="tth_fIg4"></a><a name="fig:rccap"></a><br />Figure 4: Charging and discharging of a capacitor. <br /><a name="tth_sEc4"></a><a name="sec:kernel"></a><br />4 Kernel Magic<br /><a href="http://www.teamquest.com/resources/gunther/display/6/index.htm">An Addendum</a><br />Now let's go inside the <a href="http://lxr.linux.no/linux/kernel/timer.c#L623" target="_new">Linux kernel</a> and see what it is doing to generate these load average numbers.<br />623 unsigned long avenrun[3];<br />624<br />625 static inline void calc_load(unsigned long ticks)<br />626 {<br />627 unsigned long active_tasks; /* fixed-point */<br />628 static int count = LOAD_FREQ;<br />629<br />630 count -= ticks;<br />631 if (count < 0) {<br />632 count += LOAD_FREQ;<br />633 active_tasks = count_active_tasks();<br />634 CALC_LOAD(avenrun[0], EXP_1, active_tasks);<br />635 CALC_LOAD(avenrun[1], EXP_5, active_tasks);<br />636 CALC_LOAD(avenrun[2], EXP_15, active_tasks);<br />637 }<br />638 }<br />The countdown is over a LOAD_FREQ of 5 HZ.
How often is that?<br />1 HZ = 100 ticks per second<br />5 HZ = 500 ticks<br />1 tick = 10 milliseconds<br />500 ticks = 5000 milliseconds (or 5 seconds)<br />So, 5 HZ means that CALC_LOAD is called every 5 seconds.<br /><a name="tth_sEc4.1">4.1</a> Magic Numbers<br />The function CALC_LOAD is a macro defined in <a href="http://lxr.linux.no/linux/include/linux/sched.h#L48" target="_new">sched.h</a><br />58 extern unsigned long avenrun[]; /* Load averages */<br />59<br />60 #define FSHIFT 11 /* nr of bits of precision */<br />61 #define FIXED_1 (1<<FSHIFT) /* 1.0 as fixed-point */<br />62 #define LOAD_FREQ (5*HZ) /* 5 sec intervals */<br />63 #define EXP_1 1884 /* 1/exp(5sec/1min) as fixed-point */<br />64 #define EXP_5 2014 /* 1/exp(5sec/5min) */<br />65 #define EXP_15 2037 /* 1/exp(5sec/15min) */<br />66<br />67 #define CALC_LOAD(load,exp,n) \<br />68 load *= exp; \<br />69 load += n*(FIXED_1-exp); \<br />70 load >>= FSHIFT;<br />A notable curiosity is the appearance of those magic numbers: 1884, 2014, 2037. What do they mean? If we look at the preamble to the code, we learn:<br />/*<br />49 * These are the constant used to fake the fixed-point load-average<br />50 * counting. Some notes:<br />51 * - 11 bit fractions expand to 22 bits by the multiplies: this gives<br />52 * a load-average precision of 10 bits integer + 11 bits fractional<br />53 * - if you want to count load-averages more often, you need more<br />54 * precision, or rounding will get you. With 2-second counting freq,<br />55 * the EXP_n values would be 1981, 2034 and 2043 if still using only<br />56 * 11 bit fractions.<br />57 */<br />These magic numbers are a result of using a fixed-point (rather than a floating-point) representation.<br />Using the 1-minute sampling as an example, the conversion of exp(5/60) into base-2 with 11 bits of precision occurs like this:<br />e^(5/60) → e^(5/60) × 2^11   (1)<br />But EXP_M represents the inverse function exp(-5/60).
Therefore, we can calculate these magic numbers directly from the formula: <a name="eqn:magic"></a><br />EXP_M = 2^11 / 2^(5 log2(e) / 60M)   (2)<br />where M = 1 for 1-minute sampling. Table <a href="http://www.teamquest.com/resources/gunther/display/5/#tab:magic">1</a> summarizes some relevant results.<br /><a name="tth_tAb1"></a><a name="tab:magic"></a><br />T      EXP_T    Rounded<br />5/60   1884.25  1884<br />5/300  2014.15  2014<br />5/900  2036.65  2037<br />2/60   1980.86  1981<br />2/300  2034.39  2034<br />2/900  2043.45  2043<br />Table 1: Load Average magic numbers.<br />These numbers are in complete agreement with those mentioned in the kernel comments above. The fixed-point representation is used presumably for efficiency reasons, since these calculations are performed in kernel space rather than user space.<br />One question still remains, however. Where do the ratios like exp(5/60) come from?<br /><a name="tth_sEc4.2">4.2</a> Magic Revealed<br />Taking the 1-minute average as the example, CALC_LOAD is identical to the mathematical expression: <a name="eqn:expdamp"></a><br />load(t) = load(t-1) e^(-5/60) + n (1 - e^(-5/60))   (3)<br />If we consider the case n = 0, eqn.(<a href="http://www.teamquest.com/resources/gunther/display/5/#eqn:expdamp">3</a>) becomes simply: <a name="eqn:decay"></a><br />load(t) = load(t-1) e^(-5/60)   (4)<br />If we iterate eqn.(<a href="http://www.teamquest.com/resources/gunther/display/5/#eqn:decay">4</a>) between t = t0 and t = T, we get:<br />load(tT) = load(t0) e^(-5t/60)   (5)<br />which is pure exponential decay, just as we see in Fig. <a href="http://www.teamquest.com/resources/gunther/display/5/#fig:LAFull">2</a> for times between t0 = 2100 and tT = 3600. Conversely, when n = 2, as it was in our experiments, the load average is dominated by the second term, such that:<br />load(tT) = 2 (1 - e^(-5t/60))   (6)<br />which is a monotonically increasing function just like that in Fig.
<a href="http://www.teamquest.com/resources/gunther/display/5/#fig:LAFull">2</a> between t<sub>0</sub> = 0 and t<sub>T</sub> = 2100.<br /><a name="tth_sEc5">5</a> Summary<br />So, what have we learned? Those three innocuous-looking numbers in the LA triplet have a surprising amount of depth behind them.<br />The triplet is intended to provide you with some kind of information about how much work has been done on the system in the recent past (1 minute), the past (5 minutes) and the distant past (15 minutes).<br />As you will have discovered if you tried the <a href="http://www.teamquest.com/resources/gunther/display/4/index.htm">LA Triplets</a> quiz, there are problems:<br />The "load" is not the utilization but the total queue length.<br />They are point samples of three different time series.<br />They are exponentially-damped moving averages.<br />They are in the wrong order to represent trend information.<br />These inherited limitations are significant if you try to use them for capacity planning purposes. I'll have more to say about all this in the next online column Load Average Part II: Not Your Average Average.<br /><br />1 Recap of Part 1<br />This is the second in a <a href="http://www.teamquest.com/resources/gunther/display/5/index.htm">two-part series</a> where I explore the use of averages in performance analysis and capacity planning. There are many manifestations of averages, e.g., the arithmetic average (the usual one), the moving average (often used in financial planning), the geometric average (used in the SPEC CPU benchmarks), and the harmonic average (not used enough), just to name a few.<br />In Part 1, I described some simple experiments that revealed how the load averages (the LA Triplets) are calculated in the UNIX® kernel (well, the Linux kernel anyway since that source code is available online). We discovered a C-macro called CALC_LOAD that does all the work.
Taking the 1-minute average as the example, CALC_LOAD is identical to the mathematical expression: <a name="eqn:expdamp"></a><br />load(t) = load(t - 1) e<sup>-5/60</sup> + n (1 - e<sup>-5/60</sup>)<br />(1)<br />which corresponds to an exponentially-damped moving average. It says that your current load is equal to the load you had last time (decayed by an exponential factor appropriate for 1-minute reporting) plus the number of currently active processes (weighted by an exponentially increasing factor appropriate for 1-minute reporting). The only difference between the 1-minute load average shown here and the 5- and 15-minute load averages is the value of the exponential factors; the magic numbers I discussed in Part 1.<br />Another point I made in Part 1 was that we, as performance analysts, would be better off if the LA Triplets were reported in the reverse order: 15, 5, 1, because that ordering concurs with the usual convention that temporal order flows left to right. In this way it would be easier to read the LA Triplets as a trend (which was part of the original intent, I suspect). Trending information could be enhanced even further by representing the LA Triplets using animation (of the type I showed in the <a href="http://www.teamquest.com/resources/gunther/display/4/index.htm">Quiz</a>).<br />Here, in Part 2, I'll compare the UNIX load averaging approach with other averaging methods as they apply to capacity planning and performance analysis.<br /><a name="tth_sEc2">2</a> Exponential Smoothing<br />Exponential smoothing (also called filtering by electrical engineering types) is a general purpose way of prepping highly variable data before further analysis.
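To make the fixed-point mechanics concrete, here is a small Python transcription of the CALC_LOAD recurrence (an illustrative sketch, not the kernel source; note that the kernel passes in the active-task count already scaled by FIXED_1):

```python
import math

FSHIFT = 11
FIXED_1 = 1 << FSHIFT   # 1.0 as fixed-point
EXP_1 = 1884            # 2^11 / exp(5 sec / 1 min)

def calc_load(load, exp, n):
    """Transcription of the kernel's CALC_LOAD macro.
    'n' must already be scaled by FIXED_1, as the kernel does."""
    load *= exp
    load += n * (FIXED_1 - exp)
    return load >> FSHIFT

# Two active processes for one minute (12 samples at 5 s intervals),
# starting from an idle system.
fixed = 0
for _ in range(12):
    fixed = calc_load(fixed, EXP_1, 2 * FIXED_1)

# The same recurrence in floating point, straight from eqn (1):
floating = 0.0
for _ in range(12):
    floating = floating * math.exp(-5 / 60) + 2 * (1 - math.exp(-5 / 60))

print(fixed / FIXED_1, floating)   # both close to 2*(1 - 1/e) ~ 1.264
```

The integer result tracks the floating-point recurrence to within the 11-bit truncation error, which is why the kernel can afford to skip floating point entirely.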
Filters of this type are available in most data analysis tools such as: <a href="http://office.microsoft.com/en-us/excel/default.aspx" target="_new">EXCEL</a>, <a href="http://www.wolfram.com/" target="_new">Mathematica</a>, and <a href="http://www.minitab.com/" target="_new">Minitab</a>.<br />The smoothing equation is an iterative function that has the general form: <a name="eqn:smooth"></a><br />Y(t) = Y(t - 1) + a [ X(t) - Y(t - 1) ]<br />(2)<br />where X(t) is the raw input data, Y(t - 1) is the value due to the previous smoothing iteration, Y(t) is the new smoothed value, and a is the smoothing constant. If it looks a little incestuous, it's supposed to be.<br /><a name="tth_sEc2.1">2.1</a> Smooth Loads<br />Expressing the UNIX load average method (see equation (<a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#eqn:expdamp">1</a>)) in the same format produces: <a name="eqn:laform"></a><br />load(t) = load(t-1) + (1 - EXP_R) [ n(t) - load(t-1) ]<br />(3)<br />Eqn.(<a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#eqn:laform">3</a>) is equivalent to (<a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#eqn:smooth">2</a>) if we choose EXP_R = 1 - a. The constant a is called the smoothing constant and can range between 0.0 and 1.0 (in other words, you can think of it as a percentage). EXCEL uses the terminology damping factor for the quantity (1 - a).<br />The value of a determines the percentage by which the current smoothing iteration should compensate for changes in the data that produced the previous smoothing iteration. Larger values of a yield a more rapid response to changes in the data but produce coarser rather than smoother resultant curves (less damped). Conversely, smaller values of a produce much smoother curves but take much longer to compensate for fluctuations in the data (more damped).
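As a concrete illustration (a sketch with made-up step data, not taken from the article), the iteration in equation (2) can be applied to a sudden jump in the raw input to see how a controls the response:

```python
def smooth(raw, a):
    """Exponential smoothing: Y(t) = Y(t-1) + a * (X(t) - Y(t-1))."""
    y = raw[0]
    out = [y]
    for x in raw[1:]:
        y = y + a * (x - y)
        out.append(y)
    return out

# Raw data with a step change from 0 to 2 (like two processes starting).
raw = [0.0] * 5 + [2.0] * 10

for a in (0.08, 0.30):   # 0.08 is the 1-minute LA correction factor
    print(a, [round(v, 2) for v in smooth(raw, a)])
```

With a = 0.08 the smoothed value only creeps toward 2, while a = 0.30 closes most of the gap within a few samples; that is exactly the damped behaviour of the 1-minute load average versus a more responsive filter.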
So, what value of a should be used?<br /><a name="tth_sEc2.2">2.2</a> Critical Damping<br />EXCEL documentation suggests 0.20 to 0.30 are ``reasonable'' values to choose for a. This is a patently misleading statement because it does not take into account how much variation in the data (e.g., error) you are prepared to tolerate.<br />From the analysis in Section <a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#sec:recap">1</a> we can now see that EXP_R plays the role of a damping factor in the UNIX load average. The UNIX load average is therefore equivalent to an exponentially-damped moving average. The more usual moving average (of the type often used by financial analysts) is just a simple arithmetic average over some number of data points.<br />The following Table <a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#tab:dampers">1</a> shows the respective smoothing and damping factors that are based on the magic numbers described in Part 1. <a name="tth_tAb1"></a><br /><a name="tab:dampers"></a><br /><table><tr><th>LA Factor<br />EXP_R</th><th>Damping<br />1 - a<sub>R</sub></th><th>Correction<br />a<sub>R</sub></th></tr><tr><td>EXP_1</td><td>0.9200 ( ≈ 92%)</td><td>0.0800 ( ≈ 8%)</td></tr><tr><td>EXP_5</td><td>0.9835 ( ≈ 98%)</td><td>0.0165 ( ≈ 2%)</td></tr><tr><td>EXP_15</td><td>0.9945 ( ≈ 99%)</td><td>0.0055 ( ≈ 1%)</td></tr></table>Table 1: UNIX load average damping factors. <br />The value of a is calculated from 1 - exp(-5/(60R)) where R = 1, 5 or 15. From Table <a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#tab:dampers">1</a> we see that the bigger the correction for variation in the data (i.e., a<sub>R</sub>), the more responsive the result is to those variations and therefore we see less damping (1 - a<sub>R</sub>) in the output.<br />This is why the 1-minute reports respond more quickly to changes in load than do the 15-minute reports.
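Table 1's entries follow directly from a<sub>R</sub> = 1 - exp(-5/(60R)); a quick sketch to confirm them:

```python
import math

# Correction (a_R) and damping (1 - a_R) factors for the R-minute
# load averages, assuming the 5-second LOAD_FREQ sampling interval.
for R in (1, 5, 15):
    a = 1 - math.exp(-5 / (60 * R))
    print(f"EXP_{R}: damping {1 - a:.4f}, correction {a:.4f}")
```

Note that even the largest correction factor (8%, for the 1-minute average) sits well below EXCEL's suggested 0.20 to 0.30 range.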
Note also, that the largest correction for the UNIX load average is about 8% for the 1-minute report and is nowhere near the 20% or 30% suggested by EXCEL.<br /><a name="tth_sEc3">3</a> Other Averages<br />Next, we compare these time-dependent smoothed averages with some of the more familiar forms of averaging used in performance analysis and capacity planning. <a name="sec:ssavg"></a><br /><a name="tth_sEc3.1">3.1</a> Steady-State Averages<br />The average most commonly used in capacity planning, benchmarking and other kinds of performance modeling is the steady-state average. <a name="tth_fIg1"></a><a name="fig:daily"></a><br />Figure 1: Load averages represented as a time series. <br />In terms of the UNIX load average, this would correspond to observing the reported loads over a sufficiently long time (T) much as shown in Fig. <a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#fig:daily">1</a>.<br />Note that sysadmins almost never use the load average metrics in this way. Part of the reason for that avoidance lies in the fact that the LA metrics are embedded inside other commands (which vary across UNIX platforms) and need to be extracted.
<a href="http://www.teamquest.com/solutions-products/products/it-service-analyzer/view/index.htm">TeamQuest View</a> is an excellent example of the way in which such classic limitations in traditional UNIX performance tools have been partially circumvented.<br /><a name="tth_sEc3.2">3.2</a> Example Application<br />To determine the steady-state average for the above time series we would first need to break up the area under the plot into a set of uniform columns of equal width.<br />The width of each column corresponds to the uniform time step Δt.<br />The height of each column corresponds to Q(Δt), the instantaneous queue length.<br />The area of each column is given by Q(Δt) * Δt (width * height).<br />The total area under the curve is Σ Q(Δt) * Δt.<br />The time-averaged queue length Q (the steady-state value) is then approximated by the fraction:<br />Q = [ Σ Q(Δt) * Δt ] / T<br />The longer the observation period, the more accurate the steady-state value. Fig. <a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#fig:ToyExpSmth">2</a> makes this idea more explicit. It shows a time period where six requests become enqueued (the black curve representing approximate columns). <a name="tth_fIg2"></a><a name="fig:ToyExpSmth"></a><br />Figure 2: Toy model with exponential smoothing for the 1-minute load average. <br />Superimposed over the top is the curve that corresponds to the 1-minute load average. <a name="tth_fIg3"></a><br /><a name="fig:ToyLA"></a><br />Figure 3: All three load average curves.<br />Fig. <a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#fig:ToyLA">3</a> shows all three load average metrics superimposed as well as the 5-second sample average.<br /><a name="tth_sEc3.3">3.3</a> Little's Law<br />Consider the UNIX Timeshare scheduler. <a name="tth_fIg4"></a><br /><a name="fig:runq"></a><br />Figure 4: Simple model of UNIX scheduler. The schematic in Fig.
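The column construction above can be sketched in a few lines of Python; the sample values and time step here are illustrative assumptions, not taken from the article's figures:

```python
# Approximate the steady-state (time-averaged) queue length Q from
# instantaneous samples Q(dt), using  Q = sum(Q(dt) * dt) / T.
samples = [0, 1, 2, 2, 3, 2, 1, 1, 0, 0]   # queue length at each step
dt = 5.0                                    # uniform time step (seconds)

T = dt * len(samples)                       # total observation period
area = sum(q * dt for q in samples)         # total area under the curve
Q = area / T
print(Q)                                    # -> 1.2
```

With a uniform time step, Q reduces to the plain arithmetic mean of the samples; the column picture matters once the time steps are no longer uniform.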
<a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#fig:runq">4</a> depicts the scheduler states according to the usual UNIX conventions:<br />N: processes in the system<br />R: running or runnable processes<br />D: uninterruptible processes<br />S: processes in a sleep state<br />Applying steady-state averages of the type defined in Section <a href="http://www.teamquest.com/resources/gunther/display/7/index.htm#sec:ssavg">3.1</a> to other well-known performance metrics, such as:<br />Z: average time spent in sleep state<br />X: average process completion rate<br />S: average execution quantum (in CPU ticks)<br />W: total time spent in both running and runnable state<br />allows us to express some very powerful relationships between them and Q (the steady-state queue length). One such relationship is Little's Law<br />Q = X W<br />which relates the average queue length (Q) to the average throughput (X) and the time (W):<br />W = (N / X) - Z<br />In some sense, Q is the average of the load average. These same kinds of averages are used in performance analyzer tools like <a href="http://www.perfdynamics.com/Tools/PDQcode.html" target="_new">Pretty Damn Quick</a> and <a href="http://www.teamquest.com/solutions-products/products/model/index.htm">TeamQuest Model</a>. Note, that such insightful relationships are virtually impossible to recognize without taking steady-state averages. Little's law is a case in point. It had existed as a piece of performance folklore many years prior to 1961 when J. D. Little published his now famous proof of the relationship.<br /><a name="tth_sEc4">4</a> Summary<br />So, what have we learned from all this? Those three little numbers tucked away innocently in certain UNIX commands are not so trivial after all. The first point is that load in this context refers to run-queue length (i.e., the sum of the number of processes waiting in the run-queue plus the number currently executing).
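The two relationships above can be combined in a quick numerical sanity check; the figures for N, X and Z below are illustrative assumptions, not measurements:

```python
# Little's Law: Q = X * W, with the interactive response time W = N/X - Z.
N = 20       # processes in the system
X = 0.5      # average process completion rate (per second)
Z = 10.0     # average time spent in the sleep state (seconds)

W = N / X - Z        # total time in the running + runnable states
Q = X * W            # steady-state average queue length
print(W, Q)          # -> 30.0 15.0
```

Given steady-state averages for any two of the quantities, the third follows for free; that is exactly the kind of relationship that point samples of the LA triplet cannot provide.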
Therefore, the number is absolute (not relative) and thus it can be unbounded; unlike utilization (AKA ``load'' in queueing theory parlance).<br />Moreover, they have to be calculated in the kernel and therefore they must be calculated efficiently. Hence, the use of fixed-point arithmetic and that gives rise to those very strange looking constants in the kernel code. At the end of Part 1 I showed you that the magic numbers are really just exponential decay and rise constants expressed in fixed-point notation.<br />In Part 2 we found out that these constants are actually there to provide exponential smoothing of the raw instantaneous load values. More formally, the UNIX load average is an exponentially smoothed moving average function. In this way sudden changes can be damped so that they don't contribute significantly to the longer term picture. Finally, we compared the exponentially damped average with the more common type of averages that appear as metrics in benchmarks and performance models.<br />On average, the UNIX load average metrics are certainly not your average average.Praveenhttp://www.blogger.com/profile/03454611536899060975noreply@blogger.com0tag:blogger.com,1999:blog-4432468942521109730.post-85263008979703648292008-10-12T23:29:00.000-07:002008-10-12T23:30:35.700-07:00How can I install a new Kernel version but keeping my old one installed?When upgrading or installing a new Kernel version from an RPM package, the first thing you have to know is which Kernel to use. See <a href="http://fedoranews.org/alex/tutorial/rpm/10.shtml">this</a> page for more details about discovering the right version.<br />Always try to keep the current Kernel installed when upgrading, so you can test the newly installed image and see whether it causes trouble for some reason; if it does, you can reboot with the current one.
Starting from this point, we can say: Never use the <a href="http://fedoranews.org/alex/tutorial/rpm/1.shtml">"Freshen"</a> or <a href="http://fedoranews.org/alex/tutorial/rpm/1.shtml">"Upgrade"</a> commands unless you really know what is going to happen. When upgrading critical packages like the Kernel, try to use the <a href="http://fedoranews.org/alex/tutorial/rpm/8.shtml">Test</a> option before executing the final command. And always try to install, not upgrade.<br />And finally, when upgrading, use the Backup option "--repackage" so you can reinstall the old package you've removed during the upgrade process. You can issue the command below to upgrade the Kernel while making a backup of the currently installed Kernel package:<br /><strong>rpm -Uvh --repackage new-kernel.rpm</strong><br />The old RPM will be placed in the directory given by the "_repackage_dir" RPM macro, usually /var/spool/repackage; check <a href="http://fedoranews.org/alex/tutorial/rpm/15.shtml">this</a> for more details on macros. If some problem occurs with the newly installed Kernel version, then use the following command to reinstall the old one:<br /><strong>rpm -ivh --oldpackage /var/spool/repackage/old-kernel.rpm</strong>Praveenhttp://www.blogger.com/profile/03454611536899060975noreply@blogger.com0tag:blogger.com,1999:blog-4432468942521109730.post-166169283598187932008-09-20T22:35:00.000-07:002008-09-20T22:57:48.596-07:00Apache Advanced Configuration with POP3 Protocol<span class="txtplain1"><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;">Because of Apache’s modular nature, it is possible to serve multiple protocols from one software process.<span style=""> </span>This means that you could conceivably use the Apache core to serve HTTP, POP3, SMTP, and more simultaneously.<span style=""> </span>One thing to keep in mind when attempting this is that normally, you cannot serve more than one protocol on a single IP and port combination.<span style=""> </span>This
means that if you wish to use Apache to serve both HTTP and POP3 requests, you’ll need to use both ports 80 and 110 (the default HTTP and POP3 ports).<span style=""> </span>This is important to think about, because you will need to configure both hardware and software firewalls accordingly.<span style=""> </span>Usually, you won’t have to worry about conflicting ports; however, most well known protocols have a default or well known port that is defined so as to not conflict with any other protocols.</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;"><span style=""> </span></span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;">Using the example of the POP3 module that is developed by the Apache Software Foundation, you would need to make the following configuration changes to your Apache server.<span style=""> </span>First, you will need to create a subdirectory “httpd-pop3” under the “modules” directory in the Apache home directory.<span style=""> </span>You will then need to re-run the “configure” command with the option “--enable-pop”.<span style=""> </span>This will install the POP3 module into Apache.</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;"><span style=""> </span></span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;">After you’ve installed the module, you need to correctly configure it in the httpd.conf file.<span style=""> </span>This requires you to create a virtual host similar to the ones described in the previous article.<span style=""> </span>The following configuration block gives an example of this:</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;"><br /></span></p></span><p class="MsoNormal" style="margin: 0pt;"><span
style="font-family:courier new,courier,mono;">&lt;VirtualHost 123.234.345.111:110&gt;</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;"><span style=""> </span><span style=""> </span>Pop3Protocol On</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;"><span style=""> </span><span style=""> </span>Pop3Maildrops /www/mail/pop3</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;">&lt;/VirtualHost&gt;</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;"><br /></span></p><span class="txtplain1"><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;">This configuration creates a virtual host listening to a specific IP on port 110, the Internet-standard port for POP3.<span style=""> </span>The internal lines tell Apache to turn on the POP3 protocol on this virtual host, as well as define the directory where the server should look for the mail files.<span style=""> </span>However, this configuration alone does not suffice, as POP3 requires an authenticated user to make sense, since you need to make sure users only have access to their own email and that hackers don’t have access to any of it.<span style=""> </span>To configure this, you need to use the following “Directory” block configuration:</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;"> <o:p></o:p></span></p><br /></span><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier
new,courier,mono;">&lt;Directory /www/mail/pop3&gt;</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;"><span style=""> </span><span style=""> </span>AuthUserFile /www/auth/pop3.users</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;"><span style=""> </span><span style=""> </span>AuthName Pop3Auth</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;"><span style=""> </span><span style=""> </span>AuthType Basic</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;"><span style=""> </span><span style=""> </span>Require valid-user</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;">&lt;/Directory&gt;</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;"><br /></span></p><br /><span class="txtplain1"><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:courier new,courier,mono;"><span style=""> </span></span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;"> <o:p></o:p></span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;">Notice that this configuration sets up security for the same directory as is defined in the “Pop3Maildrops” line in the “VirtualHost” block above.<span style=""> </span>Otherwise, this acts as any other authentication definition as described in the previous article; it defines where the user file is to be found, the name transmitted to the client for authentication against, what type of authentication to use, and finally requires that a user be validated before they are granted access to the contents of the
directory.</span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;"><span style=""> </span></span></p><p class="MsoNormal" style="margin: 0pt;"><span style="font-family:Verdana,Arial,Helvetica,sans-serif;">Basically, setting up and configuring this protocol takes advantage of some of the techniques described in the “Intermediate” article and describes how they work together with a Protocol Module to allow even more flexibility to the Apache server. </span></p></span>Praveenhttp://www.blogger.com/profile/03454611536899060975noreply@blogger.com1tag:blogger.com,1999:blog-4432468942521109730.post-24556903597015511062008-06-30T22:17:00.000-07:002008-06-30T22:33:00.999-07:00Configuring Yum in RHEL5In this article I am going to discuss how we can configure Yum for DVD sources in RHEL5.<br />Yum is the package management tool used these days. It has replaced the old "up2date" command which used to come with RHEL4. That command fetched updates from the RHN (Red Hat Network) for the installed operating system, provided the user had bought a support/update entitlement from Red Hat. But in the new version of Red Hat, and in its free clone CentOS 5, "up2date" has been dropped and "yum" has been included in its place. "yum" was in Fedora Core for a long time and was used to update packages via 3rd party repositories. It matured with Fedora, and now that Red Hat considers it mature enough, it has made its way into RHEL.<br /><br />The major problem one faces with Yum is configuring it for DVD/CD sources. Yum by default doesn't come enabled for these sources and we need to explicitly enable it.
I don't know what the reason is behind not enabling Yum for these sources by default but, whatever it is, we can still hack "yum" on our own and can configure it to use DVD/CD install sources.<br /><br />Before starting I would like to mention that I am using a DVD source in this article, which is represented by "/dev/dvd" and mounted on "/media/cdrom". The steps I describe here can be easily extended for CD sources as well. Later in this article I will show how we can configure a local yum repository and use it for package management on our LAN clients.<br /><br />First of all you have to put the CD/DVD media into your CD/DVD ROM/Writer. Then you need to mount it manually if you are logged in as the root user in a GUI. To do so:<br /><br /><strong>mount /dev/dvd /media/cdrom</strong><br /><br />After mounting the DVD we need to copy the content of the DVD into a directory. For example I have a directory /dvd/rhel5/. I will copy the whole contents of /media/cdrom into /dvd/rhel5/. This is the command:<br /><br /><strong>cp -r /media/cdrom/* /dvd/rhel5/</strong><br /><br />After copying the contents it's time to do some modifications. First of all we need to bring the xml files defining the groups to the directory one level higher.<br /><br /><strong>mv /dvd/rhel5/Server/repodata/comps-rhel5-server-core.xml /dvd/rhel5/Server<br />mv /dvd/rhel5/VT/repodata/comps-rhel5-vt.xml /dvd/rhel5/VT<br />mv /dvd/rhel5/Cluster/repodata/comps-rhel5-cluster.xml /dvd/rhel5/Cluster<br />mv /dvd/rhel5/ClusterStorage/repodata/comps-rhel5-cluster-st.xml /dvd/rhel5/ClusterStorage</strong><br /><br />Now we need to delete the repodata/ directories which come with the default install tree. The reason behind this is that in their xml files we have a string<br /><location>xml:base="media://1170972069.396645#1" ..... </location><br />This string is present in repomd.xml as well as primary.xml.gz. This creates problems with using DVD/CD sources with yum.
So we need to do the following:<br /><br /><strong>rm -rf /dvd/rhel5/Server/repodata<br />rm -rf /dvd/rhel5/VT/repodata<br />rm -rf /dvd/rhel5/Cluster/repodata<br />rm -rf /dvd/rhel5/ClusterStorage/repodata<br /></strong><br />After we have deleted the default repodata/ directories it's time to re-create them using the "createrepo" command. This command doesn't come installed by default, so we need to install its RPM:<br /><br /><strong>rpm -ivh /dvd/rhel5/Server/createrepo-0.4.4-2.fc6.noarch.rpm<br /></strong><br />The next step is to run this command. Before running it, we need to switch to the /dvd/ directory. Then run the commands listed below:<br /><br /><strong>createrepo -g comps-rhel5-server-core.xml rhel5/Server/<br />createrepo -g comps-rhel5-vt.xml rhel5/VT/<br />createrepo -g comps-rhel5-cluster.xml rhel5/Cluster/<br />createrepo -g comps-rhel5-cluster-st.xml rhel5/ClusterStorage/<br /></strong><br />The above commands will do most of the job. Now it's time to configure /etc/yum.conf for our local repository. Note that we can also create separate repo files in the /etc/yum.repos.d/ directory, but I have tried that without any luck. So do the following:<br /><br /><strong>vi /etc/yum.conf</strong><br /><br />In this file, type in the following:<br /><br /># PUT YOUR REPOS HERE OR IN separate files named file.repo<br /># in /etc/yum.repos.d<br />[Server]<br />name=Server<br />baseurl=file:///dvd/rhel5/Server/<br />enabled=1<br />[VT]<br />name=Virtualization<br />baseurl=file:///dvd/rhel5/VT/<br />enabled=1<br />[Cluster]<br />name=Cluster<br />baseurl=file:///dvd/rhel5/Cluster/<br />enabled=1<br />[ClusterStorage]<br />name=Cluster Storage<br />baseurl=file:///dvd/rhel5/ClusterStorage/<br />enabled=1<br /><br />We can also use GPG key signing.
For that, write the following on top of the above lines:<br /><br />gpgcheck=1<br />gpgkey=file:///dvd/rhel5/RPM-GPG-KEY-fedora file:///dvd/rhel5/RPM-GPG-KEY-fedora-test file:///dvd/rhel5/RPM-GPG-KEY-redhat-auxiliary file:///dvd/rhel5/RPM-GPG-KEY-redhat-beta file:///dvd/rhel5/RPM-GPG-KEY-redhat-former <a href="file:///dvd/rhel5/RPM-GPG-KEY-redhat-release">file:///dvd/rhel5/RPM-GPG-KEY-redhat-release</a><br /><br />This will be sufficient for now. Let's create the yum cache now.<br /><br /><strong>yum clean all<br />yum update</strong><br /><br />It's all done now. We can now use the "yum" command to install/remove/query packages and yum will be using the local yum repository. I am mentioning some of the basic "yum" commands which will do the job for you; for more options to the "yum" command, see the man page of "yum".<br />yum install package_name Description: Installs the given package<br />yum list Description: Lists all available packages in the yum database<br />yum search package_name Description: Searches for a particular package in the database and, if found, prints brief info about it.<br />yum remove package_name Description: Removes a package.<br /><br />Now I will mention the steps you can use to extend this local repository to become a local HTTP-based repository so that LAN clients can use it for package management. I will be using Apache to configure this repository as it's the best available software for this job. To configure the repository for HTTP access by LAN clients, we need to make it available to them. For that I am declaring a virtualhost entry in Apache's configuration file.
This is how it looks for me:<br /><br />&lt;VirtualHost&gt;<br /><em>ServerAdmin webmaster@server.example.com<br />ServerName server.example.com<br />DocumentRoot "/dvd/rhel5/"<br />ErrorLog logs/server.example.com-error_log<br />CustomLog logs/server.example.com-access_log common</em><br />&lt;/VirtualHost&gt;<br /><br />After this:<br /><strong>service httpd start<br />chkconfig httpd on<br /></strong><br />Now it's time to make a yum.conf file that we will use at the client end. I am writing my yum.conf for clients. You can use it and modify it according to your setup.<br /><br />[main]<br />cachedir=/var/cache/yum<br />keepcache=0<br />debuglevel=2<br />logfile=/var/log/yum.log<br />pkgpolicy=newest<br />distroverpkg=redhat-release<br />tolerant=1<br />exactarch=1<br />obsoletes=1<br />plugins=1<br />metadata_expire=1800<br />gpgcheck=1<br /><br /># PUT YOUR REPOS HERE OR IN separate files named file.repo<br /># in /etc/yum.repos.d<br />[Server]<br />name=Server<br />baseurl=http://192.168.1.5/Server/<br />enabled=1<br />[VT]<br />name=Virtualization<br />baseurl=http://192.168.1.5/VT/<br />enabled=1<br />[Cluster]<br />name=Cluster<br />baseurl=http://192.168.1.5/Cluster/<br />enabled=1<br />[ClusterStorage]<br />name=Cluster Storage<br />baseurl=http://192.168.1.5/ClusterStorage/<br />enabled=1<br /><br />Copy this file to the /etc/ directory on the client, replacing the original file there. After copying is done, it's time to do this:<br /><br /><strong>yum clean all<br />yum update<br />rpm --import /etc/pki/rpm-gpg/*<br /></strong><br />Now you can use yum on the client end to install any package and it will communicate with the local repo server to get the package for you. You can also use pirut in the same way to get things done.<br /><br />So this is how we can configure Yum for RHEL5 Server and can also use it to create our own local repo server for the LAN.
Hope you have enjoyed reading and have tried this yourself :).Praveenhttp://www.blogger.com/profile/03454611536899060975noreply@blogger.com15tag:blogger.com,1999:blog-4432468942521109730.post-77711897128649469362007-10-10T02:22:00.000-07:002007-10-10T02:39:00.741-07:00Implementing NFS<span style=";font-family:Verdana,Arial,Helvetica;font-size:85%;" ><p style="font-family:times new roman;"><span style="font-size:130%;">NFS client and server support is actually built into the Linux kernel. The NFS server application is named <code>rpc.nfsd</code> and the mount daemon is <code>rpc.mountd</code>. There is also a quota support application named <code>rpc.rquotad</code>. These NFS daemons are normally started at boot time from the script <code>/etc/rc.d/init.d/nfs</code>. Most Linux implementations include this NFS support by default. </span></p><p style="font-family:times new roman;"><span style="font-size:130%;">The NFS script only operates if the <code>/etc/exports</code> file exists and is not empty (zero length). The <code>/etc/exports</code> file is described in more detail below.</span></p><h2 style="font-weight: bold;"><span style=";font-family:Verdana,Arial,Helvetica;font-size:180%;" >NFS Server Support</span></h2> <p style="font-family:times new roman;"><span style="font-size:130%;">Dynamic sharing of directories is done by <code>rpc.nfsd</code> using the <code>exportfs</code> program, which changes the current table of exported file systems (not <code>/etc/exports</code> itself). The following is an example using exportfs: </span></p><pre style="font-family:times new roman;"><span style="font-size:130%;">exportfs clientDomainName:/a/path/name/on/the/server<br />exportfs -o rw :/a/path/name/on/the/server</span></pre> <p style="font-family:times new roman;"><span style="font-size:130%;">The first exports the directory <code>/a/path/name/on/the/server</code> to a specified client. In this case the domain name is <code>clientDomainName<i>*.foo.com</i>. This could also be an IP address or an IP address and subnet mask.
NIS group names can also be used. The directory is exported as read-only when no options specified. </code></span></p><p style="font-family:times new roman;"><span style="font-size:130%;">The second instance of <code>exportfs</code> exports the same directory but allows the world to access it. The exportfs supports a number of options. In this case, the command allows read-write access. </span></p><p style="font-family:times new roman;"><span style="font-size:130%;">The <code>exportfs</code> program is also used to remove an export. This is done using the <i>-u</i> option as shown below: </span></p><pre style="font-family:times new roman;"><span style="font-size:130%;">exportfs -u client DomainName:/a/path/name/on/the/server</span></pre> <p style="font-family:times new roman;"><span style="font-size:130%;">The <code>/etc/exports</code> file is used to define exported NFS directories when NFS is started. Each line in the file defines the directory to be exported and how the directory can be accessed. The following is a sample <code>/etc/exports</code> file: </span></p><pre style="font-family:times new roman;"><span style="font-size:130%;">/home/guest (ro)<br />/pub *.local.dom(rw) (ro)</span></pre> <p style="font-family:times new roman;"><span style="font-size:130%;">The first allows any user read-only access to the /home/guest directory. The second allows read-write access to computers with a domain name of local.dom and read-only access to everyone else.</span></p><p>-----------------------------------------------------------------------------------------------</p></span><p> The following methods can be used to specify host names: </p><ul><li><p><i class="EMPHASIS">single host</i> — Where one particular host is specified with a fully qualified domain name, hostname, or IP address. 
</p></li><li><p><i class="EMPHASIS">wildcards</i> — Where a <tt class="COMMAND">*</tt> or <tt class="COMMAND">?</tt> character is used to take into account a grouping of fully qualified domain names that match a particular string of letters. Wildcards should not be used with IP addresses; however, it is possible for them to work accidentally if reverse DNS lookups fail. </p><p>Be careful when using wildcards with fully qualified domain names, as they tend to be more exact than expected. For example, the use of <tt class="COMMAND">*.example.com</tt> as a wildcard allows sales.example.com to access an exported file system, but not bob.sales.example.com. To match both possibilities both <tt class="COMMAND">*.example.com</tt> and <tt class="COMMAND">*.*.example.com</tt> must be specified. </p></li><li><p><i class="EMPHASIS">IP networks</i> — Allows the matching of hosts based on their IP addresses within a larger network. For example, <tt class="COMMAND">192.168.0.0/28</tt> allows the first 16 IP addresses, from 192.168.0.0 to 192.168.0.15, to access the exported file system, but not 192.168.0.16 and higher. </p></li><li><p><i class="EMPHASIS">netgroups</i> — Permits an NIS netgroup name, written as <tt class="COMMAND">@<var class="REPLACEABLE"><group-name></group-name></var></tt>, to be used. This effectively puts the NIS server in charge of access control for this exported file system, where users can be added and removed from an NIS group without affecting <tt class="FILENAME">/etc/exports</tt>. </p></li></ul>--------------------------------------------------------------------------------------------<br /><br /> <span style="font-weight: bold;"><span style="font-size:180%;">NFS export Options :</span><br /><br /></span><ul><li><p><tt class="OPTION">ro</tt> — Mounts of the exported file system are read-only. Remote hosts are not able to make changes to the data shared on the file system. 
To allow hosts to make changes to the file system, the read/write (<tt class="OPTION">rw</tt>) option must be specified. </p></li><li><p><tt class="OPTION">wdelay</tt> — Causes the NFS server to delay writing to the disk if it suspects another write request is imminent. This can improve performance by reducing the number of times the disk must be accessed by separate write commands, reducing write overhead. The <tt class="OPTION">no_wdelay</tt> option turns off this feature, but is only available when using the <tt class="OPTION">sync</tt> option. </p></li><li><p><tt class="OPTION">root_squash</tt> — Prevents root users connected remotely from having root privileges and assigns them the user ID for the user <samp class="COMPUTEROUTPUT">nfsnobody</samp>. This effectively "squashes" the power of the remote root user to the lowest local user, preventing unauthorized alteration of files on the remote server. Alternatively, the <tt class="OPTION">no_root_squash</tt> option turns off root squashing. To squash every remote user, including root, use the <tt class="OPTION">all_squash</tt> option. To specify the user and group IDs to use with remote users from a particular host, use the <tt class="OPTION">anonuid</tt> and <tt class="OPTION">anongid</tt> options, respectively. In this case, a special user account can be created for remote NFS users to share and specify <tt class="OPTION">(anonuid=<var class="REPLACEABLE"><uid-value></uid-value></var>,anongid=<var class="REPLACEABLE"><gid-value></gid-value></var>)</tt>, where <tt class="OPTION"><var class="REPLACEABLE"><uid-value></uid-value></var></tt> is the user ID number and <tt class="OPTION"><var class="REPLACEABLE"><gid-value></gid-value></var></tt> is the group ID number. 
</p></li></ul><br /><h2 class="SECT2"><span style="font-size:180%;"><a name="S1-NFS-SERVER-CONFIG-EXPORTFS">The <tt class="COMMAND">exportfs</tt> Command</a></span></h2><p> Every file system being exported to remote users via NFS, as well as the access level for those file systems, is listed in the <tt class="FILENAME">/etc/exports</tt> file. When the <tt class="COMMAND">nfs</tt> service starts, the <tt class="COMMAND">/usr/sbin/exportfs</tt> command launches and reads this file, passes control to <tt class="COMMAND">rpc.mountd</tt> (if NFSv2 or NFSv3) for the actual mounting process, then to <tt class="COMMAND">rpc.nfsd</tt> where the file systems are then available to remote users. </p><p> When issued manually, the <tt class="COMMAND">/usr/sbin/exportfs</tt> command allows the root user to selectively export or unexport directories without restarting the NFS service. When given the proper options, the <tt class="COMMAND">/usr/sbin/exportfs</tt> command writes the exported file systems to <tt class="FILENAME">/var/lib/nfs/xtab</tt>. Since <tt class="COMMAND">rpc.mountd</tt> refers to the <tt class="FILENAME">xtab</tt> file when deciding access privileges to a file system, changes to the list of exported file systems take effect immediately. </p><p> The following is a list of commonly used options available for <tt class="COMMAND">/usr/sbin/exportfs</tt>: </p><ul><li><p><tt class="OPTION">-r</tt> — Causes all directories listed in <tt class="FILENAME">/etc/exports</tt> to be exported by constructing a new export list in <tt class="FILENAME">/var/lib/nfs/xtab</tt>. This option effectively refreshes the export list with any changes that have been made to <tt class="FILENAME">/etc/exports</tt>. </p></li><li><p><tt class="OPTION">-a</tt> — Causes all directories to be exported or unexported, depending on what other options are passed to <tt class="COMMAND">/usr/sbin/exportfs</tt>. 
If no other options are specified, <tt class="COMMAND">/usr/sbin/exportfs</tt> exports all file systems specified in <tt class="FILENAME">/etc/exports</tt>. </p></li><li><p><tt class="OPTION">-o <var class="REPLACEABLE">file-systems</var></tt> — Specifies directories to be exported that are not listed in <tt class="FILENAME">/etc/exports</tt>. Replace <var class="REPLACEABLE">file-systems</var> with additional file systems to be exported. These file systems must be formatted in the same way they are specified in <tt class="FILENAME">/etc/exports</tt>. Refer to <a href="http://docs.huihoo.com/redhat/rhel-4-docs/rhel-rg-en-4/s1-nfs-server-export.html#S2-NFS-SERVER-CONFIG-EXPORTS">Section 9.3.1 <i>The <tt class="FILENAME">/etc/exports</tt> Configuration File</i></a> for more information on <tt class="FILENAME">/etc/exports</tt> syntax. This option is often used to test an exported file system before adding it permanently to the list of file systems to be exported. </p></li><li><p><tt class="OPTION">-i</tt> — Ignores <tt class="FILENAME">/etc/exports</tt>; only options given from the command line are used to define exported file systems. </p></li><li><p><tt class="OPTION">-u</tt> — Unexports all shared directories. The command <tt class="COMMAND">/usr/sbin/exportfs -ua</tt> suspends NFS file sharing while keeping all NFS daemons up. To re-enable NFS sharing, type <tt class="COMMAND">exportfs -r</tt>. </p></li><li><p><tt class="OPTION">-v</tt> — Verbose operation, where the file systems being exported or unexported are displayed in greater detail when the <tt class="COMMAND">exportfs</tt> command is executed. </p></li></ul><p> If no options are passed to the <tt class="COMMAND">/usr/sbin/exportfs</tt> command, it displays a list of currently exported file systems. </p><p> For more information about the <tt class="COMMAND">/usr/sbin/exportfs</tt> command, refer to the <tt class="COMMAND">exportfs</tt> man page. 
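The host-name forms described above (wildcards and IP networks) can be illustrated with a short Python sketch. This is an illustration only, not how <tt class="COMMAND">rpc.mountd</tt> actually resolves clients, and the <code>host_matches</code> helper is hypothetical; note in particular that <code>fnmatch</code>'s wildcard semantics differ from exports-file matching, as flagged in the comments.

```python
# A minimal sketch of /etc/exports-style host matching: wildcards via
# fnmatch and IP networks via the stdlib ipaddress module.
import fnmatch
import ipaddress

def host_matches(pattern, client):
    """Return True if a client (hostname or IP) matches an exports pattern."""
    try:
        # IP-network pattern, e.g. 192.168.0.0/28
        net = ipaddress.ip_network(pattern)
        return ipaddress.ip_address(client) in net
    except ValueError:
        # Otherwise treat it as a hostname wildcard, e.g. *.example.com.
        # Caveat: fnmatch's '*' crosses dots, so this sketch is *looser*
        # than the matching described in the text above, where
        # *.example.com does not match bob.sales.example.com.
        return fnmatch.fnmatch(client, pattern)

# A /28 covers exactly 16 addresses: 192.168.0.0 through 192.168.0.15.
print(host_matches("192.168.0.0/28", "192.168.0.5"))    # True
print(host_matches("192.168.0.0/28", "192.168.0.16"))   # False
```

Running the two checks above confirms the /28 example from the host-name list: 192.168.0.15 is the last address allowed, and 192.168.0.16 falls outside the network.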
</p><br /><div class="SECT2"><br /></div>Praveenhttp://www.blogger.com/profile/03454611536899060975noreply@blogger.com0tag:blogger.com,1999:blog-4432468942521109730.post-83584870278441309562007-10-10T00:41:00.000-07:002007-10-10T00:42:10.796-07:00NIS - Client and Server Configuration<div class="post"> <a name="111868473814704223"></a> <h3 class="post-title"> <a href="http://linuxhelp.blogspot.com/2005/06/nis-client-and-server-configuration.html"><br /></a> </h3> <div class="post-body"> <div class="KonaBody"> <div style="text-align: justify;"><span style="font-size: 180%;">N</span>etwork <span style="font-weight: bold;">I</span>nformation <span style="font-weight: bold;">S</span>ervice (<span style="font-weight: bold;">NIS</span>) is the traditional directory service on *nix platforms. The setup of NIS is relatively simple when compared to other directory services like LDAP. NIS stores administrative files like <span style="font-family: courier new;">/etc/passwd</span>, <span style="font-family: courier new;">/etc/hosts</span> and so on in Berkeley DB files. 
This data is made available over the network to all the clients that are connected to the NIS domain.<br /></div><br /><div style="text-align: justify;"><span style="color: rgb(204, 0, 0);">Drawback:</span> The network connection is not encrypted and all transactions - including passwords - are sent in clear text.<br /></div><br /><span style="color: rgb(0, 0, 153); font-size: 130%;">Configuring an NIS Server</span><br /><ul><li>Make sure the following packages are installed on your machine:<br /><div style="text-align: justify;"><span style="font-weight: bold; font-family: courier new;"></span></div><blockquote><div style="text-align: justify;"><span style="font-weight: bold; font-family: courier new;">ypserv</span> : Contains the NIS server daemon (<span style="font-family: courier new;">ypserv</span>) and the NIS password daemon (<span style="font-family: courier new;">yppasswdd</span>).<br /></div><span style="font-weight: bold; font-family: courier new;">portmap</span> : mandatory</blockquote><div style="text-align: justify;">The <span style="font-family: courier new;">yppasswdd</span> daemon enables the NIS server to change the NIS database and password database information, at the client's request. In order to change your NIS password, the <span style="font-family: courier new;">yppasswdd</span> daemon must be running on the master server. From the client, one must use <span style="font-weight: bold; font-family: courier new;">yppasswd</span> to update a password within the NIS domain.<br /></div></li><br /><li>Insert the following line in the /etc/sysconfig/network file:<br /><blockquote style="font-weight: bold;"><code>NISDOMAIN=mynisdomain</code></blockquote></li><br /><li><div style="text-align: justify;">Specify the networks you wish NIS to recognize in <span style="font-weight: bold; font-family: courier new;">/var/yp/securenets</span>.<br /></div>Eg:<br /><blockquote><code># Permit access to localhost:<br /><span style="font-weight: bold;">host 127.0.0.1</span><br /><br /># Permit access to the xyz.com network:<br /><span style="font-weight: bold;">255.255.255.0 192.168.0.0</span></code></blockquote></li><br /><li>Insert the following lines in the <span style="font-weight: bold; font-family: courier new;">/var/yp/Makefile</span> :<br /><blockquote><code><span style="font-weight: bold;">NOPUSH=true</span> # Set to true only if you have a single master NIS server; if you have even one slave server, set it to <span style="font-style: italic;">false</span><br /><span style="font-weight: bold;">MERGE_GROUP=false</span> # If you have any group passwords in /etc/gshadow that need to be merged into the NIS group map, set it to true.<br /><span style="font-weight: bold;">MERGE_PASSWD=false</span> # Set to true if you want to merge encrypted passwords from /etc/shadow into the NIS passwd map.</code></blockquote><br />Uncomment the following line:<br /><blockquote><code><span style="font-weight: bold;">all: passwd group hosts netid</span> ... 
</code></blockquote></li><br /><li>If you have slave NIS servers then enter their names in <span style="font-weight: bold; font-family: courier new;">/var/yp/ypservers</span> .</li><br /><li>Finally run the following command:<br /><blockquote><code># <span style="font-weight: bold;">/usr/lib/yp/ypinit -m</span></code></blockquote></li></ul><span style="color: rgb(0, 0, 153); font-size: 130%;">Configuring a <span style="font-style: italic;">slave</span><span style="font-weight: bold; font-style: italic;"> </span>NIS server</span><br /><ul><li>Install <span style="font-family: courier new;">ypserv</span> package on the slave server.<br /></li><li style="text-align: justify;">Make sure you have the name of the slave server listed in <span style="font-family: courier new;">/var/yp/ypservers</span> on the master server.</li><li>Now issue the command :<br /><blockquote><code># <span style="font-weight: bold;">/usr/lib/yp/ypinit -s masterserver</span></code></blockquote></li><li style="text-align: justify;">Make sure the <span style="font-family: courier new;">NOPUSH</span> value in the <span style="font-family: courier new;">/var/yp/Makefile</span> on the master server is set to "<span style="font-family: courier new;">false</span>". Then when the master server's databases are updated, a call to the <span style="font-family: courier new;">yppush</span> executable will be made. <span style="font-family: courier new;">yppush</span> is responsible for transferring the updated contents from the master to the slaves. 
Only transfers within the same domain are made with <span style="font-family: courier new;">yppush</span>.</li><li>Lastly start <span style="font-family: courier new;">ypserv</span> and <span style="font-family: courier new;">yppasswdd</span> daemons<br /><blockquote><code># <span style="font-weight: bold;">service ypserv start</span><br /># <span style="font-weight: bold;">service yppasswdd start</span></code></blockquote></li></ul><span style="color: rgb(0, 0, 153); font-size: 130%;">Configuring an NIS client</span><br /><ul><li>Make sure the following packages are installed on your machine:<br /><span style="font-weight: bold; font-family: courier new;">ypbind</span> - NIS client daemon<br /><span style="font-weight: bold; font-family: courier new;">authconfig</span> - used for automatic configuration of NIS client.<br /><div style="text-align: justify;"><span style="font-weight: bold; font-family: courier new;">yp-tools</span>: Contains utilities like <span style="font-family: courier new;">ypcat</span>, <span style="font-family: courier new;">yppasswd</span>, <span style="font-family: courier new;">ypwhich</span> and so on used for viewing and modifying the user account details within the NIS server.<br /></div><span style="font-weight: bold; font-family: courier new;">portmap</span> (mandatory)</li><li>There are two methods to configure an NIS client.<br /><ul><li><span style="color: rgb(204, 0, 0);">Method 1</span>: Manual method<br /><ul><li>Enter the following line in the <span style="font-weight: bold; font-family: courier new;">/etc/sysconfig/network</span> file:<br /><blockquote style="font-weight: bold;"><code>NISDOMAIN=mynisdomain</code></blockquote></li><li>Append the following line in <span style="font-weight: bold; font-family: courier new;">/etc/yp.conf</span> :<br /><blockquote><code><span style="font-weight: bold;">domain mynisdomain server 192.168.0.1</span> # replace this with your NIS server address.</code></blockquote></li><li>Make sure the 
following lines contain '<span style="font-weight: bold; font-family: courier new;">nis</span>' as an option in the <span style="font-weight: bold; font-family: courier new;">/etc/nsswitch.conf</span> file:<br /><blockquote><code>passwd: files <span style="font-weight: bold;">nis</span><br />shadow: files <span style="font-weight: bold;">nis</span><br />group: files <span style="font-weight: bold;">nis</span><br />hosts: files <span style="font-weight: bold;">nis</span> dns<br />networks: files <span style="font-weight: bold;">nis</span><br />protocols: files <span style="font-weight: bold;">nis</span><br /><span style="font-weight: bold;">publickey: nisplus</span><br />automount: files <span style="font-weight: bold;">nis</span><br />netgroup: files <span style="font-weight: bold;">nis</span><br />aliases: files <span style="font-weight: bold;">nisplus</span></code></blockquote></li><li>Finally restart <span style="font-family: courier new;">ypbind</span> and <span style="font-family: courier new;">portmap</span>.<br /></li></ul></li><li><span style="color: rgb(204, 0, 0);">Method 2</span>: Run <span style="font-weight: bold; font-family: courier new;">authconfig</span> and follow directions.</li></ul></li><li>To check if you have successfully configured the NIS client, execute the following:<br /><blockquote><code># <span style="font-weight: bold;">ypcat passwd</span></code></blockquote><div style="text-align: justify;">The output will be the entries of the NIS server's <span style="font-family: courier new;">/etc/passwd</span> map with user IDs greater than or equal to 500.</div></li></ul> </div></div></div>Praveenhttp://www.blogger.com/profile/03454611536899060975noreply@blogger.com0tag:blogger.com,1999:blog-4432468942521109730.post-44309504932687829432007-09-25T01:59:00.000-07:002007-09-25T02:03:59.634-07:00Linux Ethernet Bonding<p style="font-family: times new roman;"><b><span style="font-size: 12pt;">What is bonding?</span></b><br
/>Bonding is the same as port trunking. In the following I will use the word bonding because in practice we bond the interfaces into one.</p> <p style="font-family: times new roman;"><b><span style="font-size: 12pt;">But still...what is bonding?</span></b><br />Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Bonding also allows you to create multi-gigabit pipes to transport traffic through the highest traffic areas of your network. For example, you can aggregate three 1-megabit ports into a single 3-megabit trunk port. That is equivalent to having one interface with 3 megabits of bandwidth.</p> <p style="font-family: times new roman;"><b><span style="font-size: 12pt;">Where should I use bonding?</span></b><br />You can use it wherever you need redundant links, fault tolerance or load balancing. It is the best way to have a high availability network segment. A very useful way to use bonding is to use it in connection with 802.1q VLAN support (your network equipment must have the 802.1q protocol implemented).</p><br /><pre style="font-family: times new roman;">Note :<br />------<br />The bonding driver originally came from Donald Becker's beowulf patches for<br />kernel 2.0. 
It has changed quite a bit since, and the original tools from<br />extreme-linux and beowulf sites will not work with this version of the driver.<br /><br />For new versions of the driver, patches for older kernels and the updated<br />userspace tools, please follow the links at the end of this file.<br /><br />Installation<br />============<br /><br />1) Build kernel with the bonding driver<br />---------------------------------------<br />For the latest version of the bonding driver, use kernel 2.4.12 or above<br />(otherwise you will need to apply a patch).<br /><br />Configure kernel with `make menuconfig/xconfig/config', and select<br />"Bonding driver support" in the "Network device support" section. It is<br />recommended to configure the driver as a module since it is currently the only way<br />to pass parameters to the driver and configure more than one bonding device.<br /><br />Build and install the new kernel and modules.<br /><br />2) Get and install the userspace tools<br />--------------------------------------<br />This version of the bonding driver requires an updated ifenslave program. The<br />original one from extreme-linux and beowulf will not work. Kernels 2.4.12<br />and above include the updated version of ifenslave.c in the Documentation/network<br />directory. For older kernels, please follow the links at the end of this file.<br /><br />IMPORTANT!!! If you are running on Redhat 7.1 or greater, you need<br />to be careful because /usr/include/linux is no longer a symbolic link<br />to /usr/src/linux/include/linux. If you build ifenslave while this is<br />true, ifenslave will appear to succeed but your bond won't work. 
The purpose<br />of the -I option on the ifenslave compile line is to make sure it uses<br />/usr/src/linux/include/linux/if_bonding.h instead of the version from<br />/usr/include/linux.<br /><br />To install ifenslave.c, do:<br /> # gcc -Wall -Wstrict-prototypes -O -I/usr/src/linux/include ifenslave.c -o ifenslave<br /> # cp ifenslave /sbin/ifenslave<br /><br />3) Configure your system<br />------------------------<br />Also see the following section on the module parameters. You will need to add<br />at least the following line to /etc/conf.modules (or /etc/modules.conf):<br /><br /> alias bond0 bonding<br /><br />Use standard distribution techniques to define the bond0 network interface. For<br />example, on modern RedHat distributions, create an ifcfg-bond0 file in the<br />/etc/sysconfig/network-scripts directory that looks like this:<br /><br />DEVICE=bond0<br />IPADDR=192.168.1.1<br />NETMASK=255.255.255.0<br />NETWORK=192.168.1.0<br />BROADCAST=192.168.1.255<br />ONBOOT=yes<br />BOOTPROTO=none<br />USERCTL=no<br /><br />(put the appropriate values for your network instead of 192.168.1).<br /><br />All interfaces that are part of the trunk should have SLAVE and MASTER<br />definitions. For example, in the case of RedHat, if you wish to make eth0 and<br />eth1 (or other interfaces) a part of the bonding interface bond0, their config<br />files (ifcfg-eth0, ifcfg-eth1, etc.) should look like this:<br /><br />DEVICE=eth0<br />USERCTL=no<br />ONBOOT=yes<br />MASTER=bond0<br />SLAVE=yes<br />BOOTPROTO=none<br /><br />(use DEVICE=eth1 for eth1 and MASTER=bond1 for bond1 if you have configured a<br />second bonding interface).<br /><br />Restart the networking subsystem or just bring up the bonding device if your<br />administration tools allow it. Otherwise, reboot. 
(For the case of RedHat<br />distros, you can do `ifup bond0' or `/etc/rc.d/init.d/network restart'.)<br /><br />If the administration tools of your distribution do not support master/slave<br />notation in configuration of network interfaces, you will need to configure<br />the bonding device with the following commands manually:<br /><br /> # /sbin/ifconfig bond0 192.168.1.1 up<br /> # /sbin/ifenslave bond0 eth0<br /> # /sbin/ifenslave bond0 eth1<br /><br />(substitute 192.168.1.1 with your IP address and add custom network and custom<br />netmask to the arguments of ifconfig if required).<br /><br />You can then create a script with these commands and put it into the appropriate<br />rc directory.<br /><br />If you specifically need all your network drivers to be loaded before the<br />bonding driver, use one of modutils' powerful features: in your modules.conf,<br />tell it that when asked for bond0, modprobe should first load all your interfaces:<br /><br />probeall bond0 eth0 eth1 bonding<br /><br />Be careful not to reference bond0 itself at the end of the line, or modprobe will<br />die in an endless recursive loop.<br /><br />4) Module parameters.<br />---------------------<br />The following module parameters can be passed:<br /><br /> mode=<br /><br />Possible values are 0 (round robin policy, default), 1 (active backup<br />policy), and 2 (XOR). See question 9 and the HA section for additional info.<br /><br /> miimon=<br /><br />Use integer value for the frequency (in ms) of MII link monitoring. Zero value<br />is default and means the link monitoring will be disabled. A good value is 100<br />if you wish to use link monitoring. See HA section for additional info.<br /><br /> downdelay=<br /><br />Use integer value for delaying disabling a link by this number (in ms) after<br />the link failure has been detected. Must be a multiple of miimon. Default<br />value is zero. 
See HA section for additional info.<br /><br /> updelay=<br /><br />Use integer value for delaying enabling a link by this number (in ms) after<br />the "link up" status has been detected. Must be a multiple of miimon. Default<br />value is zero. See HA section for additional info.<br /><br /> arp_interval=<br /><br />Use integer value for the frequency (in ms) of arp monitoring. Zero value<br />is default and means the arp monitoring will be disabled. See HA section<br />for additional info. This field is valid in active_backup mode only.<br /><br /> arp_ip_target=<br /><br />An IP address to use when arp_interval is > 0. This is the target of the<br />arp request sent to determine the health of the link to the target. <br />Specify this value in ddd.ddd.ddd.ddd format.<br /><br />If you need to configure several bonding devices, the driver must be loaded<br />several times. I.e. for two bonding devices, your /etc/conf.modules must look<br />like this:<br /><br />alias bond0 bonding<br />alias bond1 bonding<br /><br />options bond0 miimon=100<br />options bond1 -o bonding1 miimon=100<br /><br />5) Testing configuration<br />------------------------<br />You can test the configuration and transmit policy with ifconfig. 
For example,<br />for round robin policy, you should get something like this:<br /><br />[root]# /sbin/ifconfig<br />bond0 Link encap:Ethernet HWaddr 00:C0:F0:1F:37:B4 <br /> inet addr:XXX.XXX.XXX.YYY Bcast:XXX.XXX.XXX.255 Mask:255.255.252.0<br /> UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1<br /> RX packets:7224794 errors:0 dropped:0 overruns:0 frame:0<br /> TX packets:3286647 errors:1 dropped:0 overruns:1 carrier:0<br /> collisions:0 txqueuelen:0<br /><br />eth0 Link encap:Ethernet HWaddr 00:C0:F0:1F:37:B4 <br /> inet addr:XXX.XXX.XXX.YYY Bcast:XXX.XXX.XXX.255 Mask:255.255.252.0<br /> UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1<br /> RX packets:3573025 errors:0 dropped:0 overruns:0 frame:0<br /> TX packets:1643167 errors:1 dropped:0 overruns:1 carrier:0<br /> collisions:0 txqueuelen:100<br /> Interrupt:10 Base address:0x1080<br /><br />eth1 Link encap:Ethernet HWaddr 00:C0:F0:1F:37:B4 <br /> inet addr:XXX.XXX.XXX.YYY Bcast:XXX.XXX.XXX.255 Mask:255.255.252.0<br /> UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1<br /> RX packets:3651769 errors:0 dropped:0 overruns:0 frame:0<br /> TX packets:1643480 errors:0 dropped:0 overruns:0 carrier:0<br /> collisions:0 txqueuelen:100<br /> Interrupt:9 Base address:0x1400<br /><br />Questions :<br />===========<br /><br />1. Is it SMP safe?<br /><br /> Yes. The old 2.0.xx channel bonding patch was not SMP safe.<br /> The new driver was designed to be SMP safe from the start.<br /><br />2. What type of cards will work with it?<br /><br /> Any Ethernet type cards (you can even mix cards - an Intel<br /> EtherExpress PRO/100 and a 3com 3c905b, for example).<br /> You can even bond together Gigabit Ethernet cards!<br /><br />3. How many bonding devices can I have?<br /><br /> One for each module you load. See section on module parameters for how<br /> to accomplish this.<br /><br />4. 
How many slaves can a bonding device have?<br /><br /> Limited by the number of network interfaces Linux supports and the<br /> number of cards you can place in your system.<br /><br />5. What happens when a slave link dies?<br /><br /> If your ethernet cards support MII status monitoring and the MII<br /> monitoring has been enabled in the driver (see description of module<br /> parameters), there will be no adverse consequences. This release<br /> of the bonding driver knows how to get the MII information and<br /> enables or disables its slaves according to their link status.<br /> See section on HA for additional information.<br /><br /> For ethernet cards not supporting MII status, or if you wish to<br /> verify that packets have been both sent and received, you may<br /> configure the arp_interval and arp_ip_target. If packets have<br /> not been sent or received during this interval, an arp request<br /> is sent to the target to generate send and receive traffic. <br /> If after this interval, either the successful send and/or<br /> receive count has not incremented, the next slave in the sequence<br /> will become the active slave.<br /><br /> If neither miimon nor arp_interval is configured, the bonding<br /> driver will not handle this situation very well. The driver will<br /> continue to send packets but some packets will be lost. Retransmits<br /> will cause serious degradation of performance (in the case when one<br /> of two slave links fails, 50% of packets will be lost, which is a serious<br /> problem for both TCP and UDP).<br /><br />6. Can bonding be used for High Availability?<br /><br /> Yes, if you use MII monitoring and ALL your cards support MII link<br /> status reporting. See section on HA for more information.<br /><br />7. 
Which switches/systems does it work with?<br /><br /> In round-robin mode, it works with systems that support trunking:<br /> <br /> * Cisco 5500 series (look for EtherChannel support).<br /> * SunTrunking software.<br /> * Alteon AceDirector switches / WebOS (use Trunks).<br /> * BayStack Switches (trunks must be explicitly configured). Stackable<br /> models (450) can define trunks between ports on different physical<br /> units.<br /> * Linux bonding, of course!<br /> <br /> In Active-backup mode, it should work with any Layer-II switch.<br /><br />8. Where does a bonding device get its MAC address from?<br /><br /> If not explicitly configured with ifconfig, the MAC address of the<br /> bonding device is taken from its first slave device. This MAC address<br /> is then passed to all following slaves and remains persistent (even if<br /> the first slave is removed) until the bonding device is brought<br /> down or reconfigured.<br /> <br /> If you wish to change the MAC address, you can set it with ifconfig:<br /><br /> # ifconfig bond0 hw ether 00:11:22:33:44:55<br /><br /> The MAC address can be also changed by bringing down/up the device<br /> and then changing its slaves (or their order):<br /> <br /> # ifconfig bond0 down ; modprobe -r bonding<br /> # ifconfig bond0 .... up<br /> # ifenslave bond0 eth...<br /><br /> This method will automatically take the address from the next slave<br /> that will be added.<br /> <br /> To restore your slaves' MAC addresses, you need to detach them<br /> from the bond (`ifenslave -d bond0 eth0'), set them down<br /> (`ifconfig eth0 down'), unload the drivers (`rmmod 3c59x', for<br /> example) and reload them to get the MAC addresses from their<br /> eeproms. If the driver is shared by several devices, you need<br /> to turn them all down. 
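The detach-and-restore sequence just described can be sketched as a small helper that prints the commands rather than running them (the bond0/eth0/3c59x names are only examples; check the output against your hardware before running anything):

```shell
#!/bin/sh
# Dry-run sketch of restoring a slave's MAC address after unbonding.
# Interface and driver names are illustrative only.
restore_mac_cmds() {
  bond=$1; slave=$2; driver=$3
  echo "ifenslave -d $bond $slave"   # detach the slave from the bond
  echo "ifconfig $slave down"        # set the interface down
  echo "rmmod $driver"               # unload the NIC driver...
  echo "modprobe $driver"            # ...and reload it to re-read the MAC from EEPROM
}

restore_mac_cmds bond0 eth0 3c59x
```

Once the printed commands look right for your setup, run them by hand as root.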
Another solution is to look for the MAC<br /> address at boot time (dmesg or tail /var/log/messages) and to<br /> reset it by hand with ifconfig:<br /><br /> # ifconfig eth0 down<br /> # ifconfig eth0 hw ether 00:20:40:60:80:A0<br /><br />9. Which transmit policies can be used?<br /><br /> Round robin, based on the order of enslaving: the output device<br /> is selected based on the next available slave, regardless of<br /> the source and/or destination of the packet.<br /><br /> XOR, based on (src hw addr XOR dst hw addr) % slave cnt. This<br /> selects the same slave for each destination hw address.<br /><br /> Active-backup policy that ensures that one and only one device will<br /> transmit at any given moment. Active-backup policy is useful for<br /> implementing high availability solutions using two hubs (see<br /> section on HA).<br /><br />High availability<br />=================<br /><br />To implement high availability using the bonding driver, you need to<br />compile the driver as a module because currently it is the only way to pass<br />parameters to the driver. This may change in the future.<br /><br />High availability is achieved by using MII status reporting. You need to<br />verify that all your interfaces support MII link status reporting. On Linux<br />kernel 2.2.17, all the 100 Mbps capable drivers and the yellowfin gigabit driver<br />support it. If your system has an interface that does not support MII status<br />reporting, a failure of its link will not be detected!<br /><br />The bonding driver can regularly check all its slaves' links by checking the<br />MII status registers. The check interval is specified by the module argument<br />"miimon" (MII monitoring). It takes an integer that represents the<br />checking time in milliseconds. It should not come too close to (1000/HZ)<br />(10 ms on i386) because it may then reduce the system interactivity. 100 ms<br />seems to be a good value. 
It means that a dead link will be detected at most<br />100 ms after it goes down.<br /><br />Example:<br /><br /> # modprobe bonding miimon=100<br /><br />Or, put in your /etc/modules.conf:<br /><br /> alias bond0 bonding<br /> options bond0 miimon=100<br /><br />There are currently two policies for high availability, depending on whether<br />a) hosts are connected to a single host or switch that supports trunking<br />b) hosts are connected to several different switches or a single switch that<br /> does not support trunking.<br /><br />1) HA on a single switch or host - load balancing<br />-------------------------------------------------<br />This is the easiest to set up and understand. Simply configure the<br />remote equipment (host or switch) to aggregate traffic over several<br />ports (Trunk, EtherChannel, etc.) and configure the bonding interfaces.<br />If the module has been loaded with the proper MII option, it will work<br />automatically. You can then try to remove and restore different links<br />and see in your logs what the driver detects. When testing, you may<br />encounter problems on some buggy switches that disable the trunk for a<br />long time if all ports in a trunk go down. 
This is not a Linux problem but a<br />switch problem (reboot the switch to confirm).<br /><br />Example 1 : host to host at double speed<br /><br /> +----------+ +----------+<br /> | |eth0 eth0| |<br /> | Host A +--------------------------+ Host B |<br /> | +--------------------------+ |<br /> | |eth1 eth1| |<br /> +----------+ +----------+<br /><br /> On each host :<br /> # modprobe bonding miimon=100<br /> # ifconfig bond0 addr<br /> # ifenslave bond0 eth0 eth1<br /><br />Example 2 : host to switch at double speed<br /><br /> +----------+ +----------+<br /> | |eth0 port1| |<br /> | Host A +--------------------------+ switch |<br /> | +--------------------------+ |<br /> | |eth1 port2| |<br /> +----------+ +----------+<br /><br /> On host A : On the switch :<br /> # modprobe bonding miimon=100 # set up a trunk on port1<br /> # ifconfig bond0 addr and port2<br /> # ifenslave bond0 eth0 eth1<br /><br />2) HA on two or more switches (or a single switch without trunking support)<br />---------------------------------------------------------------------------<br />This mode is more problematic because it relies on the fact that there<br />are multiple ports and the host's MAC address should be visible on one<br />port only to avoid confusing the switches.<br /><br />If you need to know which interface is the active one, and which ones are<br />backup, use ifconfig. 
All backup interfaces have the NOARP flag set.<br /><br />To use this mode, pass "mode=1" to the module at load time:<br /><br /> # modprobe bonding miimon=100 mode=1<br /><br />Or, put in your /etc/modules.conf:<br /><br /> alias bond0 bonding<br /> options bond0 miimon=100 mode=1<br /><br />Example 1: Using multiple hosts and multiple switches to build a "no single<br />point of failure" solution.<br /><br /><br /> | |<br /> |port3 port3|<br /> +-----+----+ +-----+----+<br /> | |port7 ISL port7| |<br /> | switch A +--------------------------+ switch B |<br /> | +--------------------------+ |<br /> | |port8 port8| |<br /> +----++----+ +-----++---+<br /> port2||port1 port1||port2<br /> || +-------+ ||<br /> |+-------------+ host1 +---------------+|<br /> | eth0 +-------+ eth1 |<br /> | |<br /> | +-------+ |<br /> +--------------+ host2 +----------------+<br /> eth0 +-------+ eth1<br /><br />In this configuration, there is an ISL - Inter Switch Link (could be a trunk),<br />several servers (host1, host2 ...) attached to both switches each, and one or<br />more ports to the outside world (port3...). One and only one slave on each host<br />is active at a time, while all links are still monitored (the system can<br />detect a failure of active and backup links).<br /><br />Each time a host changes its active interface, it sticks to the new one until<br />it goes down. In this example, the hosts are not much affected by the<br />expiration time of the switches' forwarding tables.<br /><br />If host1 and host2 have the same functionality and are used in load balancing<br />by another external mechanism, it is good to have host1's active interface<br />connected to one switch and host2's to the other. Such a system will survive<br />a failure of a single host, cable, or switch. 
The worst thing that may happen<br />in the case of a switch failure is that half of the hosts will be temporarily<br />unreachable until the other switch expires its tables.<br /><br />Example 2: Using multiple ethernet cards connected to a switch to configure<br /> NIC failover (switch is not required to support trunking).<br /><br /><br /> +----------+ +----------+<br /> | |eth0 port1| |<br /> | Host A +--------------------------+ switch |<br /> | +--------------------------+ |<br /> | |eth1 port2| |<br /> +----------+ +----------+<br /><br /> On host A : On the switch :<br /> # modprobe bonding miimon=100 mode=1 # (optional) minimize the time<br /> # ifconfig bond0 addr # for table expiration<br /> # ifenslave bond0 eth0 eth1<br /><br />Each time the host changes its active interface, it sticks to the new one until<br />it goes down. In this example, the host is strongly affected by the expiration<br />time of the switch forwarding table.<br /><br />3) Adapting to your switches' timing<br />------------------------------------<br />If your switches take a long time to go into backup mode, it may be<br />desirable not to activate a backup interface immediately after a link goes<br />down. It is possible to delay the moment at which a link will be<br />completely disabled by passing the module parameter "downdelay" (in<br />milliseconds, must be a multiple of miimon).<br /><br />When a switch reboots, it is possible that its ports report "link up" status<br />before they become usable. This could fool a bond device by causing it to<br />use some ports that are not ready yet. It is possible to delay the moment at<br />which an active link will be reused by passing the module parameter "updelay"<br />(in milliseconds, must be a multiple of miimon).<br /><br />A similar situation can occur when a host re-negotiates a lost link with the<br />switch (a case of cable replacement).<br /><br />A special case is when a bonding interface has lost all slave links. 
Then the<br />driver will immediately reuse the first link that goes up, even if the updelay<br />parameter was specified. (If there are slave interfaces in the "updelay" state,<br />the interface that first went into that state will be immediately reused.) This<br />allows downtime to be reduced if the value of updelay has been overestimated.<br /><br />Examples:<br /><br /> # modprobe bonding miimon=100 mode=1 downdelay=2000 updelay=5000<br /> # modprobe bonding miimon=100 mode=0 downdelay=0 updelay=5000<br /><br />4) Limitations<br />--------------<br />The main limitations are:<br /> - only the link status is monitored. If the switch on the other side is<br /> partially down (e.g. doesn't forward anymore, but the link is OK), the link<br /> won't be disabled. Another way to check for a dead link could be to count<br /> incoming frames on a heavily loaded host. This is not applicable to small<br /> servers, but may be useful when the front switches send multicast<br /> information on their links (e.g. VRRP), or even health-check the servers.<br /> Use the arp_interval/arp_ip_target parameters to count incoming/outgoing<br /> frames. </pre><p style="font-family: times new roman;">--------------------------------------------------------------------------------------</p><br /><p style="font-family: times new roman;"><br /></p><p style="font-family: times new roman;">The following script (the gray area) will configure a bond interface (bond0) using two Ethernet interfaces (eth0 and eth1). You can place it in your own file and run it at boot time. 
</p><pre class="gri">#!/bin/bash<br /><br />modprobe bonding <b>mode=0</b> miimon=100 # load the bonding module<br /><br />ifconfig eth0 down # bring down the eth0 interface<br />ifconfig eth1 down # bring down the eth1 interface<br /><br />ifconfig bond0 hw ether 00:11:22:33:44:55 # change the MAC address of the bond0 interface<br />ifconfig bond0 192.168.55.55 up # bond0 must have an IP address before the ethX interfaces can be enslaved<br /><br />ifenslave bond0 eth0 # put the eth0 interface into slave mode for bond0<br />ifenslave bond0 eth1 # put the eth1 interface into slave mode for bond0<br /></pre> <p>You can set up your bond interface according to your needs. By changing one parameter (mode=X) you can have the following bonding types:</p> <b>mode=0 (balance-rr)</b><br />Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.<br /><br /> <b>mode=1 (active-backup)</b><br />Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.<br /><br /> <b>mode=2 (balance-xor)</b><br />XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.<br /><br /> <b>mode=3 (broadcast)</b><br />Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.<br /><br /> <b>mode=4 (802.3ad)</b><br />IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. 
Utilizes all slaves in the active aggregator according to the 802.3ad specification.<br /><pre> <i>Pre-requisites:<br /> 1. Ethtool support in the base drivers for retrieving<br /> the speed and duplex of each slave.<br /> 2. A switch that supports IEEE 802.3ad Dynamic link<br /> aggregation.<br /> Most switches will require some type of configuration<br /> to enable 802.3ad mode.</i></pre> <b>mode=5 (balance-tlb)</b><br />Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.<br /><pre> <i>Prerequisite:<br /> Ethtool support in the base drivers for retrieving the<br /> speed of each slave.</i></pre> <b>mode=6 (balance-alb)</b><br />Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. 
The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.<br /><br /> The most commonly used are the first four modes.<br /><br />You can also use multiple bond interfaces, but for that you must load the bonding module as many times as you need.<br />Presuming that you want two bond interfaces, you must configure /etc/modules.conf as follows: <pre><i> alias bond0 bonding<br /> options bond0 -o bond0 mode=0 miimon=100<br /> alias bond1 bonding<br /> options bond1 -o bond1 mode=1 miimon=100</i></pre>Praveenhttp://www.blogger.com/profile/03454611536899060975noreply@blogger.com0tag:blogger.com,1999:blog-4432468942521109730.post-17244768385674365862007-09-20T04:51:00.000-07:002007-09-20T04:59:00.857-07:00Linux Shell Script - Random number generationA small note on generating random numbers in Linux scripts:<br /><br /> <ul><li>To generate random numbers between 0 and 9:<br /></li></ul> echo $[($RANDOM % 10)] <br /><ul><li>To generate random numbers between 1 and 10:</li></ul> echo $[($RANDOM % 10) + 1]<br /><ul><li>To generate random numbers between 30 and 40:</li></ul> <p class="MsoNormal">my_random is a function which accepts two integers (the lower and upper bounds):</p> <pre>#!/bin/bash<br /><br />my_random()<br />{<br />number=$[($RANDOM % $2) + 1]<br />while [ $number -lt $1 ]<br />do<br />number=$[($RANDOM % $2) + 1]<br />done<br />echo $number<br />}<br /><br />my_random 30 40</pre>Praveenhttp://www.blogger.com/profile/03454611536899060975noreply@blogger.com1tag:blogger.com,1999:blog-4432468942521109730.post-16193711037799265522007-08-30T15:20:00.001-07:002007-08-30T15:36:17.556-07:00Troubleshooting Tips<span style="font-size:85%;">These tips are copied from some other sites. Basically I am trying to accumulate all the information needed for Linux support in this blog ....</span><br /><br /><br /><br /><span style="color: rgb(255, 0, 0);font-size:130%;" >Up to date?<br /><br /></span>If you are installing new hardware, or having trouble with anything new, do you have the right drivers for any hardware being added? Just because the OS HAS a driver doesn't mean that it is a GOOD driver- for example, SCO OSR5 had a driver for Intel Pro100 cards, but if you wanted something that WORKED, you had to go to SCO's FTP site and download a good driver.<br />On a similar note, do you have the current recommended patches and updates? Sometimes this is all it takes to fix your problem. 
It's certainly worth checking.<br /><br /><span style="font-weight: bold;">Evidence</span><br />The first rule of troubleshooting is to do no harm. Another way to put that is to say "don't trample all over the evidence". If you aren't careful and methodical in your approach, you may destroy clues that could help you narrow down the source of your problem. For example, Unix and Linux keep a "last accessed" date for files and directories. You can see that date by using "ls -u filename" or "ls -du directoryname". That date can be a critical clue as we'll see in a moment, but it's so easy to lose it. Try this experiment: cd /tmp and make a new directory "testdate". Do<br /><br />cd /tmp<br />mkdir testdate<br />touch testdate/a<br />ls -lud testdate<br />sleep 60<br />ls -lud testdate<br /># still the same date<br />ls -l testdate<br /># nothing to see in there, but<br />ls -lud testdate<br /># now the access date has changed<br />The same change will take place to an ordinary file if you cat it, copy it or access it in any way (because it's the access date!).<br />(Confused? The "ls -lud" reads the /tmp directory to get info about "testdate". The "ls -l" looks into "testdate" so that's access.)<br />Needing or wanting to know when something was last accessed comes up all the time, but just a few examples might give you some ideas:<br />A directory was supposed to be on the backup but isn't. Did the backup program access that directory?<br />A misbehaving program is supposed to use a certain file during its startup. Did it?<br />What files does a program try to use? Knowing this can sometimes help you track down where a program is failing when it is too dumb to tell you itself.<br /><br /><br /><br />Unix systems keep two other dates: modify, and inode change. The modified date is what you see when you use "ls -l" and is the date that the file has been changed (or created, if nobody has changed it since). 
The inode change date (ls -lc) reflects the time that permissions or ownership have changed (but note that anything that affects the modified time also affects this).<br />Some systems have a command line "stat" command that shows all three times (and more useful info) at once. Here's the output from "stat" on a Red Hat Linux system:<br /><br />[tony@linux tony]$ stat .bashrc<br /> File: ".bashrc"<br /> Size: 124 Filetype: Regular File<br /> Mode: (0644/-rw-r--r--) Uid: ( 500/ tony) Gid: ( 500/ tony)<br />Device: 3,6 Inode: 19999 Links: 1 <br />Access: Fri Aug 18 07:07:17 2000(00000.00:00:13)<br />Modify: Mon Oct 25 03:51:14 1999(00298.03:16:16)<br />Change: Mon Oct 25 03:51:14 1999(00298.03:16:16)<br />Note that "Modify" and "Change" are the same. That's because "Change" will also reflect the time that a file was modified. However, if we used chmod or chown on this file, the "Change" date would reflect that and "Modify" would not.<br />You can get a similar listing on SCO, but you need to know the inode (ls -i), the filesystem the file is on, and then run fsdb on that file system. It's not usually worth the trouble, but it does also tell you where the file's disk blocks are to be found:<br /><br /># cd /tmp<br /># l -i y<br />33184 -rw-r--r-- 1 root sys 143 Aug 11 14:01 y<br />(33184 is the inode number of the file "y")<br /># fsdb /dev/root<br />/dev/root(): HTFS File System<br />FSIZE = 1922407, ISIZE = 480608<br />33184i<br />i#: 33184 md: f---rw-r--r-- ln: 1 uid: 0 gid: 3 sz: 143<br />a0:337145 a1: 0 a2: 0 a3: 0 a4: 0 a5: 0 a6: 0 <br />a7: 0 a8: 0 a9: 0 a10: 0 a11: 0 a12: 0 <br />at: Tue Aug 15 11:06:35 2000<br />mt: Fri Aug 11 14:01:10 2000<br />ct: Fri Aug 11 14:01:10 2000<br />This tells me the three times, and also that the entire file is located in one block (block 337145). 
If it were larger, and the next block was not 337146, I'd also know that the file is fragmented (once you get above a9, the rules change; see BFIND.C for a brief introduction to that).<br />The inode time can be very illuminating: suppose a problem started yesterday, and you can see that "ls -c" shows an inode change then but no "ls -l" change: it might be that ownership or permissions have been changed and are causing your problem.<br />Of course, if you don't know what the ownership or permissions should be, this may not help a lot. Some systems have a database of file ownership and permissions and can report discrepancies and even return the files to their proper state. SCO Unix, for example, can use the "custom" command to verify software. Old Xenix systems had "fixperm". Linux systems using RPM can verify packages; other Linux package managers have similar capabilities. Add-on security packages like COPS and others can also be useful: although their concern is the security aspects of files changing, their watchfulness can be useful in troubleshooting contexts also.<br />If you need to know what files a process is using, the "lsof" command (standard with Linux, available from Skunkware for SCO) can help. 
Here's an example from a Linux system:<br /><br /># lsof -p 1748<br /><br />COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME<br />httpd 1748 root cwd DIR 8,5 4096 2 /<br />httpd 1748 root rtd DIR 8,5 4096 2 /<br />httpd 1748 root txt REG 8,6 242860 361425 /usr/sbin/httpd<br />httpd 1748 root mem REG 8,5 494250 216911 /lib/ld-2.2.4.so<br />httpd 1748 root mem REG 8,6 10007 264042 /usr/lib/apache/mod_vhost_alias.so<br />httpd 1748 root mem REG 8,6 8169 263597 /usr/lib/apache/mod_env.so<br />httpd 1748 root mem REG 8,6 17794 263604 /usr/lib/apache/mod_log_config.so<br />httpd 1748 root mem REG 8,6 7562 263603 /usr/lib/apache/mod_log_agent.so<br />httpd 1748 root mem REG 8,6 8558 263605 /usr/lib/apache/mod_log_referer.so<br />httpd 1748 root mem REG 8,6 8142 263596 /usr/lib/apache/mod_dir.so<br />httpd 1748 root mem REG 8,6 370 117962 /usr/lib/locale/en_US/LC_IDENTIFICATION<br />httpd 1748 root mem REG 8,5 531205 110159 /lib/i686/libpthread-0.9.so<br />(many lines deleted)<br />This tool can also list open network connections. For example, if you need to know what processes are listening on port 137, you would use:<br /><br /># lsof -i :137<br />COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME<br />nmbd 903 root 5u IPv4 1255 UDP *:netbios-ns<br />nmbd 903 root 7u IPv4 1260 UDP apl:netbios-ns<br />Another piece of evidence that can be very helpful is when someone has logged in. The "last" command will give you that information, including how long they were on the system. But "last" gets its data from /etc/wtmp, so you may want to get information about wtmp before running last (keep in mind that wtmp is affected by logins and logouts, so unless you were logged in BEFORE whatever problem you are looking for started, your login has changed wtmp).<br />How long the system itself has been up is available from "w" or "uptime". 
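Since this kind of evidence is volatile, it can help to snapshot it into a file before you start investigating. A minimal sketch (the /tmp path and command list are just examples; substitute df -v, memsize, etc. as appropriate for your platform):

```shell
#!/bin/sh
# Capture a timestamped snapshot of volatile system state before troubleshooting.
snap="/tmp/evidence.$(date +%Y%m%d%H%M%S)"
{
  echo "== uptime ==";  uptime
  echo "== disk ==";    df -k
  echo "== logins ==";  last 2>/dev/null | head -20   # may be unavailable on some systems
} > "$snap" 2>&1
echo "evidence saved to $snap"
```

Collect the snapshot first, then poke around; the file preserves the state you would otherwise trample.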
Other evidence you may want to collect before beginning a trouble search includes:<br />• df -v<br />• du -s<br />• memsize<br />System logs may be very useful. However, before you look at them, get the "ls -l", "ls -c" and "ls -u" from the logs- this tells you when the log was last used, etc.- if there's no current information and there should be, those dates are important clues.<br />You need to know what to look for in the logs. SCO makes it easier by using words like "CONFIG", "NOTICE" and "WARNING" that you can grep for, but on other OSes you may have to just look manually until you can figure out what sort of key words they would use.<br />Another often overlooked clue is "what processes are using this file?". You get the answer to that with "fuser", which will return a list of the processes using a file ("lsof" is another very useful tool in this context).<br />Software problems can be tough. If you have "trace" (Linux "strace", Mac OS X "ktrace" and "kdump"), knowing what files a program tries to read and write can be very useful. A first attempt is to run the program like this:<br /><br />trace badprog 2> /tmp/badprog.trace<br />Then examine /tmp/badprog.trace, particularly noting access, open, read and write calls. These will all have the general format like this:<br /><br />_open ("/etc/default/lang", 0x0) = 4<br />read (4, "#\t@(#) lang.src 58.1 96/10/09 \n#\n#\tCopyr".., 1024) = 437<br />...<br />_open ("/usr/lib/lang/C/C/C/collate", 0x0) = 5<br />read (5, "".., 1) = 1<br />The return value of the call is your clue- if it's positive, it's usually OK. For example, the lines above mean that /etc/default/lang was opened correctly (we would have got a negative number instead of 4 if it failed) and then 437 bytes were read from it- see the "read(4.." ? That 4 means it's reading /etc/default/lang. Later on it opens "/usr/lib/lang/C/C/C/collate" and the return value is "5", so the next read is from that file (because it's "read(5..". 
As you can see, you really don't need to be a programmer or even understand much about this to be able to (possibly) find a problem here (do watch for "close", though- if you aren't paying attention you won't know what file is being read because the numbers get reused).<br />Sometimes it's helpful to know WHAT files got modified by an application. To find that out, this can help:<br /><br />touch /tmp/rightnow<br />runapplication<br />find / -newer /tmp/rightnow<br /><br /><br /><span style="color: rgb(255, 0, 0);font-size:130%;" >Keep it simple, stupid<br /><br /></span>The kiss ("keep it simple stupid") principle is always good to follow. Removing unnecessary hardware, making the network smaller, etc. are good examples. Dirk Hart posted this in response to a problem that involved one of those nasty sub-nets that make our brains hurt:<br /><br /><br />In this situation, I would use a netmask of 255.255.255.0 on the '486<br />(and on another pc to avoid confusion) and determine that the card is<br />indeed working on the 486. _Then_ I would worry about binary arithmetic.<br /><br />'It don't mean a thing if it ain't got that ping'<br /><br />Regards, Dirk<br /><br /><br /><span style="color: rgb(255, 0, 0);font-size:130%;" >Testing Specific Systems<br /><br /></span>You may have a very good idea of what you need to concentrate on. If a printer isn't working, bad disk blocks aren't a likely suspect. However, the problem may be less obvious: data corruption in the middle of a file could be bad disk blocks, bad memory, bad cpu, bad disk controller, a bad DMA controller.. how do you narrow this stuff down? Well, honestly, sometimes it can be hellishly difficult, particularly if you are pressed for time. If you don't know where to start, and you aren't sure if the problem is hardware or software, checking the hardware is probably the best place to start. 
Unfortunately, some of the things you'll need to do for that require that the system be in single user mode.<br />Single user mode does NOT mean that just one person is logged in. Specifically, it means that the system has been brought to init state 1, usually by typing "init 1" while logged in as root at the console.<br /><br /><br /><span style="color: rgb(255, 0, 0);font-size:130%;" >Things you can check while still multi-user<br /><br /></span>Log files. On some systems, "dmesg" gives you a lot of information, on others (SCO, for instance) it doesn't give you much and you have to look at the logs yourself. You'll find these logs in /var/log or /usr/adm (SCO). The "syslog" is often particularly useful. If printing is the problem, the print logs are in /usr/spool/lp/logs (SCO), or the "lf" entry in /etc/printcap points you to them.<br />Network statistics. If the network isn't working right, all kinds of other things may be in trouble. To be good at this, you really need to understand networking more than I want to try to cover here, but I will cover some basic points:<br />• If you can ping 127.0.0.1, your tcp/ip is working but that says NOTHING about your NIC card. As pinging your own ip address will get re-routed to 127.0.0.1, pinging your own ip address won't prove anything about your NIC either.<br />• Always try netstat and arp commands with the "-n" flags to avoid name resolution lookups- this keeps everything local and aids your troubleshooting.<br />You can see what network ports are in use with "netstat -an" (lsof is also useful for this). 
Let's just take a quick look:<br /><br />Active Internet connections (including servers)<br />Proto Recv-Q Send-Q Local Address Foreign Address (state)<br />tcp 0 0 64.13.44.12.1254 209.167.40.69.119 ESTABLISHED<br />tcp 0 0 64.13.44.12.1252 63.209.14.66.119 CLOSE_WAIT<br />tcp 0 0 64.13.44.12.1216 216.71.1.37.80 CLOSE_WAIT<br />tcp 0 0 10.1.36.3.1085 10.1.36.100.23 ESTABLISHED<br />tcp 0 0 *.80 *.* LISTEN<br />What can I tell from this? First, I'm reading news from 209.167.40.69. How do I know that? See the "119" that follows that address? If I "grep 119 /etc/services", I get:<br /><br />nntp 119/tcp readnews untp # USENET News Transfer Protocol<br />I also was reading news from another site, but I'm not presently- that's why that line says "CLOSE_WAIT" rather than "ESTABLISHED". I had a connection to a web page (the "80"- grep it from /etc/services if you don't recognize it) and I have a telnet session (23) open to 10.1.36.100. This machine also has a web-server, and it's LISTENing on port 80. Of course, none of this means anything unless you know what's SUPPOSED to be going on. So, if the web server isn't working, and you "netstat -an | grep LISTEN" and don't see *.80 in the list, that's why it isn't working. Now WHY isn't it working? If you try to start it and it fails, its logs will likely tell you what the problem is. If not, then maybe trace or strace can give you a clue.<br />A system that is running, but slowly, can be hard to figure out. The slowness is either coming from the CPU or the hard drive. The "sar" program (available for Linux, too) will let you figure out which pretty quickly. First try "sar 5 5". That gives you information about your CPU. Then, "sar -d 5 5" will tell you about your hard drives. The only problem is that, unless it's flat obvious (CPU at 0% idle or hard drives showing 1,000 msec in the avserv column), you don't know if what you are seeing is normal or abnormal for this machine running its typical load.
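Those manual /etc/services lookups can be wrapped in a tiny helper. This is just a sketch - the function name and the SERVICES override are my own inventions, not standard tools - with the file path parameterized so it can be tested against a private copy:

```shell
# Map a TCP port number to its service name: the scripted version of the
# manual "grep 119 /etc/services" shown above. SERVICES defaults to the
# real system file but can be pointed at a copy for testing.
SERVICES=${SERVICES:-/etc/services}

port_name() {
    # second field in /etc/services is "port/proto"; print the name of
    # the first matching entry
    awk -v p="$1/tcp" '$2 == p { print $1; exit }' "$SERVICES"
}
```

With the real file in place, "port_name 119" should print nntp and "port_name 80" should print http, matching the news and web connections in the listing above.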
If sar has been set up and employed properly, you'll have historical data (in /var/adm/sa on SCO) to compare, but otherwise you have to just use your best judgement, and if you haven't had much experience on similar systems, you just won't know. See Sarcheck Review (May 1999) for more information.<br />If CPU is the problem, a simple "ps -e | sort -r +2 | head" can tell you a lot, particularly if you put it in a loop like this:<br />while true<br />do<br />ps -e | sort -r +2 | head<br />sleep 10<br />done<br />If that shows a process gaining a substantial amount of time during the 10 second sleep, that process is using a lot of time- if it gained 5 seconds, for example, it is using 50% of your CPU! You might also use "ps -ef" and look at the "C" column (just before the time)- that's total CPU usage.<br />Skunkware has "iohog", "memhog", and "cpuhog". These can help you pinpoint misbehaving processes. Download them all from http://www.caldera.com/skunkware/osr5/vols/hog-1.1-VOLS.tar<br />If you are slow on a network connection, "netstat -in" and (on SCO) "llistat -l" can show you problems. Don't let DNS issues confuse you: if, for example, a telnet session is slow to connect but then works fine, that's DNS- either the connecting machine can't resolve the server's name or vice-versa. Remember that: the server is going to try to resolve the name of the client, and if it can't, there will be a delay- possibly a long delay.<br />Sometimes it helps to know just what equipment is in the machine. SCO has the "hw -v" command and "hwconfig", Windows has its Control Panel -> System -> Device Manager, and Linux has lsdev, lspci, pnpdump and, of course, dmesg.<br />See Why is my system slow for more on this.<br />If you are having troubles, particularly during an installation, take out any hardware you don't need right now.
You can always put it in later.<br />If you can identify a good-sized file that is not being used by anyone (check with fuser or lsof), running repeated "sum -r" checks on that file should always produce the same result. If it does not, and you are certain no-one else is modifying the file, then the hard disk, memory, disk controller, DMA controller or CPU could be suspect. Gee, doesn't that narrow it down? Well, maybe we can get a little better:<br />The first time you sum the file, the information should be read from the hard drive directly unless it is already in cache (to effectively flush the cache, you need to know how big it is so you can sum, or just cat to /dev/null, some other, larger file which will overwrite any trace of this file). On SCO, the cache size can be found by<br /><br />grep "i/o bufs" /usr/adm/messages<br />On Linux, "dmesg | grep cache" will get you what you need. If you pick your file size to be slightly less than cache, you've got a good shot at getting it all into memory (assuming no one else is using the machine just then).<br />The second time you sum, some portion or even all of the file will be read from memory, so if the sums were constant after flushing cache, but not otherwise, memory would be suspect.<br />You might get a clue about DMA by testing other devices that use it- the floppy disk is a good candidate: if data can be reliably written and read from floppy (use sum -r to check it) but hard drive data is changing, then it's probably NOT memory or motherboard problems.<br /><br /><br /><br /><span style="color: rgb(255, 0, 0);font-size:130%;" >Things you need to be in single-user mode for<br /><br /></span>The first thing you will want to do is run fsck. Except for root, file systems should be unmounted when running this on them- you could umount /home while multiuser, of course, but since you are going to be checking all of them, you may as well be single-user.
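The repeated-checksum idea can be sketched in a few lines of shell. FILE here is a scratch file created just for the demo; in practice you would point it at your large idle file and flush the cache between passes as described above:

```shell
# Sum the same idle file several times; any disagreement between passes
# points at disk, memory, controller or CPU trouble. The cache flush
# between passes (cat-ing a larger file to /dev/null) is omitted here.
FILE=${FILE:-/tmp/sumcheck.$$}
[ -f "$FILE" ] || dd if=/dev/zero of="$FILE" bs=1k count=64 2>/dev/null

first=$(sum -r "$FILE")
status=ok
for pass in 1 2 3; do
    [ "$(sum -r "$FILE")" = "$first" ] || status=CHANGED
done
echo "checksum passes: $status"
```

A consistently equal sum doesn't prove the hardware good, but a changing sum on a file nobody is writing is strong evidence something in the disk/memory path is bad.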
Be sure to check the man page; SCO, for example, doesn't do a full fsck on their modern versions unless you specify "-ofull" (see Filesystems for a more complete discussion of fsck).<br />What does fsck tell you? Well, if you have hard disk problems, fsck is going to trip over them, so that can be useful. Because fsck uses a fair amount of memory as it's working, memory problems will probably affect it - unfortunately, the effect may be unpleasant, so if you suspect that memory could be an issue, try creating a floppy file system first (just "mkfs /dev/fd0" is sufficient) and fsck it- that won't make you 100% sure that fsck isn't going to ruin your life for reasons beyond its control, but it gives you a little more confidence. I wouldn't fret over this- if you've had bad memory problems, things on your disk are likely not healthy already.<br />After fsck, you want to check the hard drive. SCO has "badtrk", Linux has "badblocks". Read the man pages carefully- you do NOT want to accidentally destroy your system, and these things can.<br /><br /><br /><span style="font-weight: bold;"><span style="color: rgb(255, 0, 0);font-size:130%;" >Rebooting</span><br /><br /></span>If your problems began after linking a new kernel, maybe you should boot with the previous version. On SCO, you automatically get "unix.old"; just type that at the Boot prompt. On Linux, you get the old kernel if you bothered to arrange for that in lilo.conf (or whatever your boot manager uses). Try hitting TAB at the boot prompt to see if you have other choices.<br /><br /><br /><span style="color: rgb(255, 0, 0);font-size:130%;" >Logging<br /><br /></span>Maybe you still can't figure it out. Some file is getting trashed for unknown reasons and you can't get a handle on it. Perhaps you can set up some cron or background process to try to catch whatever is going on in the act.
Something like this:<br /><br />while true<br />do<br />sleep 300<br />sum -r suspect_file<br />fuser suspect_file # or: lsof suspect_file<br />done > /tmp/mylog &<br />You can get more sophisticated; if I have to leave this for days or weeks I send the sum to temporary files, compare them and delete them if nothing has changed; or I'll collect the fuser output and run it through "ps" so I can see who or what was responsible.<br /><br /><br /><span style="color: rgb(255, 0, 0);font-size:130%;" >Seeing the invisible</span><br /><br />Sometimes you need to see what's really going on without interpretation. For dumb terminals, you can often turn on a "monitor" mode in the terminal's setup that will then display the hex characters of any control characters rather than acting on them. For a telnet session, you can escape to the telnet prompt (generally CTRL-]) and type "set netdata", then escape again and "set prettydump".<br />Another way to capture everything is to use "script". For example, "script /tmp/mystuff". Then do whatever you are doing, and when you are done, CTRL-D to end script. The /tmp/mystuff file contains everything that happened, including control characters; you can examine it with vi or hd.<br /><br /><br /><span style="color: rgb(255, 0, 0);font-size:130%;" >Mdadm problems adding disk back to software raid array<br /><br /></span>What you did should have worked. But since it didn't, now would be a good time to back up anything of value on the system.<br /><br />If the problem persists after a clean reboot, try adding the missing raid members after booting into rescue mode or booting using the CentOS Live-CD, but don't search/mount/chroot the installation.
Rescue mode will probably not start the raids without search/mount, so:<br /><br /># mdadm -A --run /dev/md1 /dev/sda2<br /># mdadm /dev/md1 -a /dev/sdb2<br /><br /><br />If you still cannot add the missing members, try recreating a raid1 from rescue mode or Live-CD:<br /><br /># mdadm -S /dev/md1<br /># mdadm -C /dev/md1 -l1 -n2 /dev/sda2 missing<br /># mdadm /dev/md1 -a /dev/sdb2<br /><br /><br />If that doesn't work, overwrite the first few MB of the sdb partition(s), reboot and try adding it/them again.<br /><br /># dd if=/dev/zero of=/dev/sdb2 bs=1M count=8<br /># init 6<br /># # After reboot<br /># mdadm /dev/md1 -a /dev/sdb2<br /><br /><br />If at any point you reassemble the raids, it would be a good idea to let them rebuild before rebooting.<br /><br /><br /><span style="color: rgb(255, 0, 0);font-size:130%;" >100 Linux Questions<br /><br /></span>1. You attempt to use shadow passwords but are unsuccessful. What characteristic of the /etc/passwd file may cause this? Choose one: a. The login command is missing. b. The username is too long. c. The password field is blank. d. The password field is prefaced by an asterisk.<br /> 2. You create a new user account by adding the following line to your /etc/passwd file. bobm:baddog:501:501:Bob Morris:/home/bobm:/bin/bash Bob calls you and tells you that he cannot logon. You verify that he is using the correct username and password. What is the problem? Choose one: a. The UID and GID cannot be identical. b. You cannot have spaces in the line unless they are surrounded with double quotes. c. You cannot directly enter the password; rather you have to use the passwd command to assign a password to the user. d. The username is too short, it must be at least six characters long.<br /> 3. Which of the following tasks is not necessary when creating a new user by editing the /etc/passwd file? Choose one: a.
Create a link from the user’s home directory to the shell the user will use. b. Create the user’s home directory c. Use the passwd command to assign a password to the account. d. Add the user to the specified group.<br /> 4. You create a new user by adding the following line to the /etc/passwd file bobm::501:501:Bob Morris:/home/bobm:/bin/bash You then create the user’s home directory and use the passwd command to set his password. However, the user calls you and says that he cannot log on. What is the problem? Choose one: a. The user did not change his password. b. bobm does not have permission to /home/bobm. c. The user did not type his username in all caps. d. You cannot leave the password field blank when creating a new user.<br /> 5. When using useradd to create a new user account, which of the following tasks is not done automatically. Choose one: a. Assign a UID. b. Assign a default shell. c. Create the user’s home directory. d. Define the user’s home directory.<br /> 6. You issue the following command useradd -m bobm But the user cannot logon. What is the problem? Choose one: a. You need to assign a password to bobm’s account using the passwd command. b. You need to create bobm’s home directory and set the appropriate permissions. c. You need to edit the /etc/passwd file and assign a shell for bobm’s account. d. The username must be at least five characters long.<br /> 7. You have created special configuration files that you want copied to each user’s home directories when creating new user accounts. You copy the files to /etc/skel. Which of the following commands will make this happen? Choose one: a. useradd -m username b. useradd -mk username c. useradd -k username d. useradd -Dk username<br /> 8. Mary has recently gotten married and wants to change her username from mstone to mknight. Which of the following commands should you run to accomplish this? Choose one: a. usermod -l mknight mstone b. usermod -l mstone mknight c. usermod -u mknight mstone d. 
usermod -u mstone mknight<br /> 9. After bob leaves the company you issue the command userdel bob. Although his entry in the /etc/passwd file has been deleted, his home directory is still there. What command could you have used to make sure that his home directory was also deleted? Choose one: a. userdel -m bob b. userdel -u bob c. userdel -l bob d. userdel -r bob<br /> 10. All groups are defined in the /etc/group file. Each entry contains four fields in the following order. Choose one: a. groupname, password, GID, member list b. GID, groupname, password, member list c. groupname, GID, password, member list d. GID, member list, groupname, password<br /> 11. You need to create a new group called sales with Bob, Mary and Joe as members. Which of the following would accomplish this? Choose one: a. Add the following line to the /etc/group file: sales:44:bob,mary,joe b. Issue the command groupadd sales. c. Issue the command groupadd -a sales bob,mary,joe d. Add the following line to the /etc/group file: sales::44:bob,mary,joe<br /> 12. What command is used to remove the password assigned to a group?<br /> 13. You changed the GID of the sales group by editing the /etc/group file. All of the members can change to the group without any problem except for Joe. He cannot even login to the system. What is the problem? Choose one: a. Joe forgot the password for the group. b. You need to add Joe to the group again. c. Joe had the original GID specified as his default group in the /etc/passwd file. d. You need to delete Joe’s account and recreate it.<br /> 14. You need to delete the group dataproject. Which two of the following tasks should you do first before deleting the group? A. Check the /etc/passwd file to make sure no one has this group as his default group. B. Change the members of the dataproject group to another group besides users. C. Make sure that the members listed in the /etc/group file are given new login names. D. 
Verify that no file or directory has this group listed as its owner. Choose one: a. A and C b. A and D c. B and C d. B and D<br /> 15. When you look at the /etc/group file you see the group kmem listed. Since it does not own any files and no one is using it as a default group, can you delete this group?<br /> 16. When looking at the /etc/passwd file, you notice that all the password fields contain ‘x’. What does this mean? Choose one: a. That the password is encrypted. b. That you are using shadow passwords. c. That all passwords are blank. d. That all passwords have expired.<br /> 17. In order to improve your system’s security you decide to implement shadow passwords. What command should you use?<br /> 18. What file contains the default environment variables when using the bash shell? Choose one: a. ~/.profile b. /bash c. /etc/profile d. ~/bash<br /> 19. You have created a subdirectory of your home directory containing your scripts. Since you use the bash shell, what file would you edit to put this directory on your path? Choose one: a. ~/.profile b. /etc/profile c. /etc/bash d. ~/.bash<br /> 20. Which of the following interprets your actions when typing at the command line for the operating system? Choose One a. Utility b. Application c. Shell d. Command<br /> 21. What can you type at a command line to determine which shell you are using?<br /> 22. You want to enter a series of commands from the command-line. What would be the quickest way to do this? Choose One a. Press enter after entering each command and its arguments b. Put them in a script and execute the script c. Separate each command with a semi-colon (;) and press enter after the last command d. Separate each command with a / and press enter after the last command<br /> 23. You are entering a long, complex command line and you reach the right side of your screen before you have finished typing. You want to finish typing the necessary commands but have the display wrap around to the left. 
Which of the following key combinations would achieve this? Choose One a. Esc, /, Enter b. /, Enter c. ctrl-d, enter d. esc, /, ctrl-d<br /> 24. After typing in a new command and pressing enter, you receive an error message indicating incorrect syntax. This error message originated from.. Choose one a. The shell b. The operating system c. The command d. The kernel<br /> 25. When typing at the command line, the default editor is the _____________ library.<br /> 26. You typed the following at the command line ls -al /home/ hadden. What key strokes would you enter to remove the space between the ‘/’ and ‘hadden’ without having to retype the entire line? Choose one a. Ctrl-B, Del b. Esc-b, Del c. Esc-Del, Del d. Ctrl-b, Del<br /> 27. You would like to temporarily change your command line editor to be vi. What command should you type to change it?<br /> 28. After experimenting with vi as your command line editor, you decide that you want to have vi your default editor every time you log in. What would be the appropriate way to do this? Choose one a. Change the /etc/inputrc file b. Change the /etc/profile file c. Change the ~/.inputrc file d. Change the ~/.profile file<br /> 29. You have to type your name and title frequently throughout the day and would like to decrease the number of key strokes you use to type this. Which one of your configuration files would you edit to bind this information to one of the function keys?<br /> 30. In your present working directory, you have the files maryletter memo1 MyTelephoneandAddressBook What is the fewest number of keys you can type to open the file MyTelephoneandAddressBook with vi? Choose one a. 6 b. 28 c. 25 d. 4<br /> 31. A variable that you can name and assign a value to is called a _____________ variable.<br /> 32. You have installed a new application but when you type in the command to start it you get the error message Command not found. What do you need to do to fix this problem? Choose one a. 
Add the directory containing the application to your path b. Specify the directory’s name whenever you run the application c. Verify that the execute permission has been applied to the command. d. Give everyone read, write and execute permission to the application’s directory.<br /> 33. You telnet into several of your servers simultaneously. During the day, you sometimes get confused as to which telnet session is connected to which server. Which of the following commands in your .profile file would make it obvious to which server you are attached? Choose one a. PS1=’\h: \w>’ b. PS1=’\s: \W>’ c. PS1=’\!: \t>’ d. PS1=’\a: \n>’<br /> 34. Which of the following environment variables determines your working directory at the completion of a successful login? Choose one a. HOME b. BASH_ENV c. PWD d. BLENDERDIR<br /> 35. Every time you attempt to delete a file using the rm utility, the operating system prompts you for confirmation. You know that this is not the customary behavior for the rm command. What is wrong? Choose one a. rm has been aliased as rm -i b. The version of rm installed on your system is incorrect. c. This is the normal behavior of the newest version of rm. d. There is an incorrect link on your system.<br /> 36. You are running out of space in your home directory. While looking for files to delete or compress you find a large file called .bash_history and delete it. A few days later, it is back and as large as before. What do you need to do to ensure that its size is smaller? Choose one a. Set the HISTFILESIZE variable to a smaller number. b. Set the HISTSIZE to a smaller number. c. Set the NOHISTFILE variable to true. d. Set the HISTAPPEND variable to true.<br /> 37. In order to display the last five commands you have entered using the history command, you would type ___________.<br /> 38. In order to display the last five commands you have entered using the fc command, you would type ___________.<br /> 39. 
You previously ran the find command to locate a particular file. You want to run that command again. What would be the quickest way to do this? Choose one a. fc -l find fc n b. history -l find history n c. Retype the command d. fc -n find<br /> 40. Using command substitution, how would you display the value of the present working directory? Choose one a. echo $(pwd) b. echo pwd c. $pwd d. pwd | echo<br /> 41. You need to search the entire directory structure to locate a specific file. How could you do this and still be able to run other commands while the find command is still searching for your file? Choose one a. find / -name filename & b. find / -name filename c. bg find / -name filename d. &find / -name filename &<br /> 42. In order to create a file called DirContents containing the contents of the /etc directory you would type ____________.<br /> 43. What would be displayed as the result of issuing the command ps ef? Choose one a. A listing of the user’s running processes formatted as a tree. b. A listing of the stopped processes c. A listing of all the running processes formatted as a tree. d. A listing of all system processes formatted as a tree.<br /> 44. What utility can you use to show a dynamic listing of running processes? __________<br /> 45. The top utility can be used to change the priority of a running process? Another utility that can also be used to change priority is ___________?<br /> 46. What key combination can you press to suspend a running job and place it in the background?<br /> 47. You issue the command jobs and receive the following output: [1]- Stopped (tty output) pine [2]+ Stopped (tty output) MyScript How would you bring the MyScript process to the foreground? Choose one: a. fg %2 b. ctrl-c c. fg MyScript d. ctrl-z<br /> 48. You enter the command cat MyFile | sort > DirList & and the operating system displays [4] 3499 What does this mean? Choose one a. This is job number 4 and the PID of the sort command is 3499. b. 
This is job number 4 and the PID of the job is 3499. c. This is job number 3499 and the PID of the cat command is 4. d. This is job number 4 and the PID of the cat command is 3499.<br /> 49. You attempt to log out but receive an error message that you cannot. When you issue the jobs command, you see a process that is running in the background. How can you fix this so that you can logout? Choose one a. Issue the kill command with the PID of each running command of the pipeline as an argument. b. Issue the kill command with the job number as an argument. c. Issue the kill command with the PID of the last command as an argument. d. Issue the kill command without any arguments.<br /> 50. You have been given the job of administering a new server. It houses a database used by the sales people. This information is changed frequently and is not duplicated anywhere else. What should you do to ensure that this information is not lost? Choose one a. Create a backup strategy that includes backing up this information at least daily. b. Prepare a proposal to purchase a backup server c. Recommend that the server be made part of a cluster. d. Install an additional hard drive in the server.<br /> 51. When planning your backup strategy you need to consider how often you will perform a backup, how much time the backup takes and what media you will use. What other factor must you consider when planning your backup strategy? _________<br /> 52. Many factors are taken into account when planning a backup strategy. The one most important one is how often does the file ____________.<br /> 53. Which one of the following factors does not play a role in choosing the type of backup media to use? Choose one: a. How frequently a file changes b. How long you need to retain the backup c. How much data needs to be backed up d. How frequently the backed up data needs to be accessed<br /> 54. When you only back up one partition, this is called a ______ backup. Choose one a. Differential b. Full c. 
Partial d. Copy<br /> 55. When you back up only the files that have changed since the last backup, this is called a ______ backup. Choose one a. Partial b. Differential c. Full d. Copy<br /> 56. The easiest, most basic form of backing up a file is to _____ it to another location.<br /> 57. When is the most important time to restore a file from your backup? Choose one a. On a regular scheduled basis to verify that the data is available. b. When the system crashes. c. When a user inadvertently loses a file. d. When your boss asks to see how restoring a file works.<br /> 58. As a system administrator, you are instructed to backup all the users’ home directories. Which of the following commands would accomplish this? Choose one a. tar rf usersbkup home/* b. tar cf usersbkup home/* c. tar cbf usersbkup home/* d. tar rvf usersbkup home/*<br /> 59. What is wrong with the following command? tar cvfb / /dev/tape 20 Choose one a. You cannot use the c option with the b option. b. The correct line should be tar -cvfb / /dev/tape20. c. The arguments are not in the same order as the corresponding modifiers. d. The files to be backed up have not been specified.<br /> 60. You need to view the contents of the tarfile called MyBackup.tar. What command would you use? __________<br /> 61. After creating a backup of the users’ home directories called backup.cpio you are asked to restore a file called memo.ben. What command should you type?<br /> 62. You want to create a compressed backup of the users’ home directories so you issue the command gzip /home/* backup.gz but it fails. The reason that it failed is that gzip will only compress one _______ at a time.<br /> 63. You want to create a compressed backup of the users’ home directories. What utility should you use?<br /> 64. You routinely compress old log files. You now need to examine a log from two months ago. In order to view its contents without first having to decompress it, use the _________ utility.<br /> 65. 
Which two utilities can you use to set up a job to run at a specified time? Choose one: a. at and crond b. atrun and crontab c. at and crontab d. atd and crond<br /> 66. You have written a script called usrs to parse the passwd file and create a list of usernames. You want to have this run at 5 am tomorrow so you can see the results when you get to work. Which of the following commands will work? Choose one: a. at 5:00 wed usrs b. at 5:00 wed -b usrs c. at 5:00 wed -l usrs d. at 5:00 wed -d usrs<br /> 67. Several of your users have been scheduling large at jobs to run during peak load times. How can you prevent anyone from scheduling an at job? Choose one: a. delete the file /etc/at.deny b. create an empty file called /etc/at.deny c. create two empty files: /etc/at.deny and /etc/at.allow file d. create an empty file called /etc/at.allow<br /> 68. How can you determine who has scheduled at jobs? Choose one: a. at -l b. at -q c. at -d d. atwho<br /> 69. When defining a cronjob, there are five fields used to specify when the job will run. What are these fields and what is the correct order? Choose one: a. minute, hour, day of week, day of month, month b. minute, hour, month, day of month, day of week c. minute, hour, day of month, month, day of week d. hour, minute, day of month, month, day of week<br /> 70. You have entered the following cronjob. When will it run? 15 * * * 1,3,5 myscript Choose one: a. at 15 minutes after every hour on the 1st, 3rd and 5th of each month. b. at 1:15 am, 3:15 am, and 5:15 am every day c. at 3:00 pm on the 1st, 3rd, and 5th of each month d. at 15 minutes after every hour every Monday, Wednesday, and Friday<br /> 71. As the system administrator you need to review Bob’s cronjobs. What command would you use? Choose one: a. crontab -lu bob b. crontab -u bob c. crontab -l d. cronq -lu bob<br /> 72. In order to schedule a cronjob, the first task is to create a text file containing the jobs to be run along with the time they are to run. 
Which of the following commands will run the script MyScript every day at 11:45 pm? Choose one: a. * 23 45 * * MyScript b. 23 45 * * * MyScript c. 45 23 * * * MyScript d. * * * 23 45 MyScript<br /> 73. Which daemon must be running in order to have any scheduled jobs run as scheduled? Choose one: a. crond b. atd c. atrun d. crontab<br /> 74. You want to ensure that your system is not overloaded with users running multiple scheduled jobs. A policy has been established that only the system administrators can create any scheduled jobs. It is your job to implement this policy. How are you going to do this? Choose one: a. create an empty file called /etc/cron.deny b. create a file called /etc/cron.allow which contains the names of those allowed to schedule jobs. c. create a file called /etc/cron.deny containing all regular usernames. d. create two empty files called /etc/cron.allow and /etc/cron.deny<br /> 75. You notice that your server load is exceptionally high during the hours of 10 am to 12 noon. When investigating the cause, you suspect that it may be a cron job scheduled by one of your users. What command can you use to determine if your suspicions are correct? Choose one: a. crontab -u b. crond -u c. crontab -l d. crond -l<br /> 76. One of your users, Bob, has created a script to reindex his database. Now he has it scheduled to run every day at 10:30 am. What command should you use to delete this job? Choose one: a. crontab -ru bob b. crontab -u bob c. crontab -du bob d. crontab -lu bob<br /> 77. What daemon is responsible for tracking events on your system?<br /> 78. What is the name and path of the default configuration file used by the syslogd daemon?<br /> 79. You have made changes to the /etc/syslog.conf file. Which of the following commands will cause these changes to be implemented without having to reboot your computer? Choose one: a. kill SIGHINT `cat /var/run/syslogd.pid` b. kill SIGHUP `cat /var/run/syslogd.pid` c. kill SIGHUP syslogd d.
kill SIGHINT syslogd<br /> 80. Which of the following lines in your /etc/syslog.conf file will cause all critical messages to be logged to the file /var/log/critmessages? Choose one: a. *.=crit /var/log/critmessages b. *crit /var/log/critmessages c. *=crit /var/log/critmessages d. *.crit /var/log/critmessages<br /> 81. You wish to have all mail messages except those of type info logged to the /var/log/mailmessages file. Which of the following lines in your /etc/syslog.conf file would accomplish this? Choose one: a. mail.*;mail!=info /var/log/mailmessages b. mail.*;mail.=info /var/log/mailmessages c. mail.*;mail.info /var/log/mailmessages d. mail.*;mail.!=info /var/log/mailmessages<br /> 82. What is the name and path of the main system log?<br /> 83. Which log contains information on currently logged in users? Choose one: a. /var/log/utmp b. /var/log/wtmp c. /var/log/lastlog d. /var/log/messages<br /> 84. You have been assigned the task of determining if there are any user accounts defined on your system that have not been used during the last three months. Which log file should you examine to determine this information? Choose one: a. /var/log/wtmp b. /var/log/lastlog c. /var/log/utmp d. /var/log/messages<br /> 85. You have been told to configure a method of rotating log files on your system. Which of the following factors do you not need to consider? Choose one: a. date and time of messages b. log size c. frequency of rotation d. amount of available disk space<br /> 86. What utility can you use to automate rotation of logs?<br /> 87. You wish to rotate all your logs weekly except for the /var/log/wtmp log which you wish to rotate monthly. How could you accomplish this? Choose one: a. Assign a global option to rotate all logs weekly and a local option to rotate the /var/log/wtmp log monthly. b. Assign a local option to rotate all logs weekly and a global option to rotate the /var/log/wtmp log monthly. c. Move the /var/log/wtmp log to a different directory.
Run logrotate against the new location. d. Configure logrotate to not rotate the /var/log/wtmp log. Rotate it manually every month.<br /> 88. You have configured logrotate to rotate your logs weekly and keep them for eight weeks. You are running out of disk space. What should you do? Choose one: a. Quit using logrotate and manually save old logs to another location. b. Reconfigure logrotate to only save logs for four weeks. c. Configure logrotate to save old files to another location. d. Use the prerotate command to run a script to move the older logs to another location.<br /> 89. What command can you use to review boot messages?<br /> 90. What file defines the levels of messages written to system log files?<br /> 91. What account is created when you install Linux?<br /> 92. While logged on as a regular user, your boss calls up and wants you to create a new user account immediately. How can you do this without first having to close your work, log off and logon as root? Choose one: a. Issue the command rootlog. b. Issue the command su and type exit when finished. c. Issue the command su and type logoff when finished. d. Issue the command logon root and type exit when finished.<br /> 93. Which file defines all users on your system? Choose one: a. /etc/passwd b. /etc/users c. /etc/password d. /etc/user.conf<br /> 94. There are seven fields in the /etc/passwd file. Which of the following lists all the fields in the correct order? Choose one: a. username, UID, GID, home directory, command, comment b. username, UID, GID, comment, home directory, command c. UID, username, GID, home directory, comment, command d. username, UID, group name, GID, home directory, comment<br /> 95. Which of the following user names is invalid? Choose one: a. Theresa Hadden b. thadden c. TheresaH d. T.H.<br /> 96. In order to prevent a user from logging in, you can add a(n) ________ at the beginning of the password field.<br /> 97. 
The beginning user identifier is defined in the _________ file.<br /> 98. Which field is used to define the user’s default shell?<br /> 99. Bob Armstrong, who has a username of boba, calls to tell you he forgot his password. What command should you use to reset his password?<br /> 100. Your company has implemented a policy that users’ passwords must be reset every ninety days. Since you have over 100 users, you created a file with each username and the new password. How are you going to change the old passwords to the new ones? Choose one: a. Use the chpasswd command along with the name of the file containing the new passwords. b. Use the passwd command with the -f option and the name of the file containing the new passwords. c. Open the /etc/passwd file in a text editor and manually change each password. d. Use the passwd command with the -u option.<br /><br />Posted by Praveen, 2007-05-16: Login to Linux server without Password using PuTTY<br /><br />PuTTYgen can be used to generate a key pair which will allow you to log in via SSH using public key authentication.<br /><br />PuTTY and PuTTYgen can be downloaded from:<br />http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html<br /><br />Let's get started.<br />Open PuTTYgen and under Parameters you should see the defaults of SSH-2 RSA and 1024 for number of bits in generated key. These settings are fine, and you can just leave them.<br /><br />Click the "Generate" button and a progress bar will appear. PuTTYgen will ask you to move the mouse around to "generate randomness"...just move the mouse around in the blank space using random motions while it processes...<br /><br />When it's finished, you will need to enter some information for your key file. The key comment field is basically another way of saying "name" of the key file... 
It tells you which key file it is... The default "key comment" will be in the form of key type and date. If you have more than one key, you will probably want to name them accordingly to tell them apart. For example: mysite-rsa-key-20050504<br /><br />Your key passphrase, if you choose to use one, is what you will have to type when connecting to the server (you can use Pageant to automatically do this for you...for a guide on Pageant, visit http://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter9.html#pageant...Pageant can also be downloaded from the location referenced above for PuTTY and PuTTYgen). If you do not wish to use a passphrase, then do not type a passphrase at this point and the key will be saved unencrypted. Not using a passphrase will allow you or anyone using the key file to automatically connect to your account, without requiring a passphrase to be entered when connecting. To set a passphrase, you'll need to type it and confirm it where asked. If you use a passphrase, just make sure that you DO NOT FORGET IT as you cannot recover it.<br /><br />Next, you will need to save your private key file. <br /><br />Click "Save private key". The save box will come up and you'll need to select a directory on your computer to save it to and type in a filename for it (be sure to leave the file type as .ppk).<br /><br />Now you'll need to upload the public key contents to your account on the server. <br /><br />You can do this process using the CNC or via SSH using the Unix shell. Brief instructions for both follow.<br /><br />Installing the public key using the CNC:<br />Navigate to your /big/dom/xDOMAIN/USERNAME (replace xDOMAIN with your xdomain and USERNAME with your account username) directory and create a directory within it named .ssh. Set the permissions on the .ssh directory to 700 (see How do I change file permissions? (chmod) if you need help with changing file permissions.)<br /><br />Within the .ssh directory, create a file named authorized_keys. 
Copy the entire contents of the box where it says "Public key for pasting into OpenSSH authorized_keys file" (starting at ssh-rsa) and paste them into the authorized_keys file (be sure to copy it exactly as it is and include no leading or trailing spaces or line breaks). Set the permissions on this file to 600 (see How do I change file permissions? (chmod) if you need help with changing file permissions.)<br /><br />Installing the public key from the Unix shell:<br />Log in to your account using SSH and while in the $HOME directory (/big/dom/xDOMAIN/USERNAME), do the following:<br />$ mkdir .ssh<br />$ echo "paste public key contents here" >> .ssh/authorized_keys<br />$ chmod 600 .ssh/authorized_keys<br />$ chmod 700 .ssh<br /><br />Now that you have created your key files and installed your public key on the server, it's time to start up PuTTY.<br /><br />In PuTTY, under Session, enter your Host Name - this is simply your domain name (no www or http) - ex: example.com<br /><br />Select SSH for the protocol. (You should now see 22 for the port.)<br /><br />Under SSH, choose 2 from Preferred SSH Protocol Version. <br /><br />Under SSH -> Auth in PuTTY, you will need to specify where your private key can be found. Remember this is where you saved the private key on your local computer. Click Browse to locate the file on your computer. (It will be the file with the .ppk extension.)<br /><br />If you wish to have your username automatically sent to the server when connecting, under Connection -> Data in PuTTY, you will see a field for "Auto-login username". Type your account username there.<br /><br />Save your settings to be used in future sessions - Under Sessions, type a name (such as "my site") in the Saved Sessions box and click Save.<br /><br />Now, select that session name by clicking on it and click Open.<br /><br />If you did not set PuTTY to automatically enter your username, you will need to do so when prompted. 
After the username has been given, if you used a passphrase when creating your key file, you should see a message that says something like:<br />Authenticating with public key "keyfilename"<br />Passphrase for key "keyfilename":<br /><br />Enter your passphrase if prompted. You should now be successfully logged in.<br /><br />To read more, visit http://www.aota.net/Telnet/puttykeyauth.php4<br /><br />Posted by Praveen, 2007-05-07: How To Configure Dynamic DNS (Fedora Core 4 Setup)<br /><br />In this howto we will learn how to build a Dynamic DNS Server. Normally when we configure DNS, we use static entries to resolve any FQDN. If we are using DHCP in our network which gives dynamic IPs to every computer that turns on or requests one, then it is not possible to configure DNS statically. For that we should configure our DNS with DHCP in a manner that whenever a computer gets a new IP, its FQDN will be automatically updated with the new IP in DNS.<br /><br />1 Installation of Packages<br /><br />Fedora Core 4 contains DNS (BIND) and DHCP (dhcp) packages on its CDs. You can install them from the Fedora Core 4 CDs or download them from the internet using the following command.<br /><br />yum -y install bind bind-chroot bind-utils bind-libs caching-nameserver dhcp<br /><br />where<br /><br />bind ----- DNS Server Package<br />bind-chroot ----- DNS runs in a chroot (jail) environment.<br />bind-libs ----- Libraries needed when using bind, bind-utils<br />bind-utils ----- Contains utilities like nslookup, host, dig etc.<br />caching-nameserver ----- gives caching capabilities to store records in cache. <br />dhcp ----- Dynamic Host Configuration Protocol Package. <br /><br />2 Configuring BIND (DNS)<br /><br />You need to tell BIND that it is okay to allow other applications to update it. 
I added the following to my BIND configuration; everything else was left as stock Fedora Core 4. Here are my local zone details, suitably modified. Here I let BIND know which domains it can update; in my case I only have one domain to deal with. I am also loading the shared secret key at this stage. My DHCP server and DNS server are on the same box, so here I am only allowing localhost to perform the update. The file rndc.key contains a shared secret, so that BIND knows that it is an approved application sending instructions.<br /><br />vi /etc/named.conf<br /><br />controls {<br /> inet 127.0.0.1 allow {localhost; } keys { "rndckey"; };<br />};<br />// Add local zone definitions here.<br />zone "example.com" {<br /> type master;<br /> file "example.com.zone";<br /> allow-update { key "rndckey"; };<br /> notify yes;<br />};<br />zone "0.168.192.in-addr.arpa" {<br /> type master;<br /> file "0.168.192.in-addr.arpa.zone";<br /> allow-update { key "rndckey"; };<br /> notify yes;<br />};<br /><br />include "/etc/rndc.key"; <br /><br />The secret key is created at installation time, so normally there is nothing more to do here.<br />Note: If your DHCP and DNS servers are on separate machines you need to copy the file between them. Both machines should use the same file, i.e. /etc/rndc.key.<br />2.1 Zone Files<br /><br />Set up your zone databases as normal. You do not need to do anything fancy, because our DHCP server will update the zone files as new IPs are allocated to our workstations. 
<br /><br />vi /var/named/chroot/var/named/example.com.zone<br /><br />$TTL 86400<br />@ IN SOA @ root (<br /> 50 ; serial<br /> 28800 ; refresh (8 hours)<br /> 7200 ; retry (2 hours)<br /> 604800 ; expire (1 week)<br /> 86400 ; minimum (1 day)<br /> )<br /> IN NS server<br />server IN A 192.168.0.1 <br /><br />vi /var/named/chroot/var/named/0.168.192.in-addr.arpa.zone<br /><br />$TTL 86400<br />@ IN SOA @ root (<br /> 50 ; serial<br /> 28800 ; refresh (8 hours)<br /> 7200 ; retry (2 hours)<br /> 604800 ; expire (1 week)<br /> 86400 ; minimum (1 day)<br /> )<br /> IN NS server<br />1 IN PTR server.example.com. <br /><br />Now make symbolic links to these files in the /var/named directory with the same names.<br /><br />cd /var/named<br />ln -s /var/named/chroot/var/named/example.com.zone example.com.zone<br />ln -s /var/named/chroot/var/named/0.168.192.in-addr.arpa.zone 0.168.192.in-addr.arpa.zone<br />3 Configuring DHCP Server<br /><br />By default the DHCP server shipped in Fedora Core 4 does not do dynamic DNS updates. You simply need to enable it. Below are the options I selected for my system. My dhcp configuration is as follows: <br /><br />vi /etc/dhcpd.conf<br /><br />authoritative;<br />include "/etc/rndc.key";<br /># Server configuration:<br /><br /><br />server-identifier server;<br />ddns-domainname "example.com.";<br />ddns-rev-domainname "in-addr.arpa.";<br />ddns-update-style interim;<br />ddns-updates on;<br />ignore client-updates;<br /><br /><br /># This is the communication zone<br /><br />zone example.com. 
{<br /> primary 127.0.0.1;<br /> key rndckey;<br />}<br /><br />default-lease-time 21600; # 6 hours<br />max-lease-time 43200; # 12 hours<br /><br /><br /># Client configuration:<br /><br />option domain-name "example.com.";<br />option ip-forwarding off; <br /><br />subnet 192.168.0.0 netmask 255.255.255.0 {<br /> range 192.168.0.100 192.168.0.200;<br /> option routers 192.168.0.1; # default gateway<br /> option subnet-mask 255.255.255.0;<br /> option broadcast-address 192.168.0.255;<br /> option domain-name-servers 192.168.0.1;<br /><br /> zone 0.168.192.in-addr.arpa. {<br /> primary 192.168.0.2;<br /> key rndckey;<br /> }<br /><br /> zone localdomain. {<br /> primary 192.168.0.2;<br /> key rndckey;<br /> } <br /><br />}<br /><br />Now execute the following permission changes to enable the named user to write the zone files whenever a name-to-IP update is required.<br /><br />chmod 770 /var/named/chroot/var/named<br />chmod 770 /var/named<br /><br />Now start the dns and dhcp services with the following commands: <br /><br />service named start<br />service dhcpd start<br /><br />Go to your client computers and configure them to take an IP from a DHCP server. With the following command, check whether your client computer name has been updated in DNS. It should resolve your name to the newly allocated IP. <br /><br />nslookup yourcomputername.example.com<br /><br />Good luck with your newly created Dynamic DNS Server.<br /><br />Posted by Praveen, 2007-05-04: The hole trick<br /><span style="font-weight:bold;">How Skype & Co. get round firewalls</span><br /><br />Peer-to-peer software applications are a network administrator's nightmare. 
In order to be able to exchange packets with their counterpart as directly as possible, they use subtle tricks to punch holes in firewalls, which shouldn't actually be letting in packets from the outside world.<br /><br />Increasingly, computers are positioned behind firewalls to protect systems from internet threats. Ideally, the firewall function will be performed by a router, which also translates the PC's local network address to the public IP address (Network Address Translation, or NAT). This means an attacker cannot directly address the PC from the outside - connections have to be established from the inside.<br /><br />This is of course a problem when two computers behind NAT firewalls need to talk directly to each other - if, for example, their users want to call each other using Voice over IP (VoIP). The dilemma is clear - whichever party calls the other, the recipient's firewall will decline the apparent attack and will simply discard the data packets. The telephone call doesn't happen. Or at least that's what a network administrator would expect.<br /><br /><span style="font-weight:bold;">Punched</span><br /><br />But anyone who has used the popular internet telephony software Skype knows that it works as smoothly behind a NAT firewall as it does if the PC is connected directly to the internet. The reason for this is that the inventors of Skype and similar software have come up with a solution.<br /><br />Naturally every firewall must also let packets through into the local network - after all, the user wants to view websites, read e-mails, etc. The firewall must therefore forward the relevant data packets from outside to the workstation computer on the LAN. However, it only does so when it is convinced that a packet represents the response to an outgoing data packet. 
A NAT router therefore keeps tables of which internal computer has communicated with which external computer and which ports the two have used.<br /><br />The trick used by VoIP software consists of persuading the firewall that a connection has been established, to which it should allocate subsequent incoming data packets. The fact that audio data for VoIP is sent using the connectionless UDP protocol acts to Skype's advantage. In contrast to TCP, which includes additional connection information in each packet, with UDP a firewall sees only the addresses and ports of the source and destination systems. If, for an incoming UDP packet, these match a NAT table entry, it will pass the packet on to an internal computer with a clear conscience.<br /><br /><span style="font-weight:bold;">Switching</span><br /><br />The switching server, with which both ends of a call are in constant contact, plays an important role when establishing a connection using Skype. This occurs via a TCP connection, which the clients themselves establish. The Skype server therefore always knows at what address a Skype user is currently available on the internet. Where possible the actual telephone connections do not run via the Skype server; rather, the clients exchange data directly.<br /><br />Let's assume that Alice wants to call her friend Bob. Her Skype client tells the Skype server that she wants to do so. The Skype server already knows a bit about Alice. From the incoming query it sees that Alice is currently registered at the IP address 1.1.1.1, and a quick test reveals that her audio data always comes from UDP port 1414. 
The Skype server passes this information on to Bob's Skype client, which, according to its database, is currently registered at the IP address 2.2.2.2 and which, by preference, uses UDP port 2828.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhz_kXyYokZzkY1qBOXSHxkrkW3qH63hA9hE8ZrUTrx_XpNFsk4N2lnX-MAYqa9jfOymlsgbst0X29NYrXBBFfNpcdT_i9KgJ1JmIoRKXZs0X6uE6UblrH_cDV-sPjQfk7jGa6SvbLk964u/s1600-h/0.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhz_kXyYokZzkY1qBOXSHxkrkW3qH63hA9hE8ZrUTrx_XpNFsk4N2lnX-MAYqa9jfOymlsgbst0X29NYrXBBFfNpcdT_i9KgJ1JmIoRKXZs0X6uE6UblrH_cDV-sPjQfk7jGa6SvbLk964u/s320/0.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5060644816544661410" /></a><br />Step 1: Alice tries to call Bob; her client signals the Skype server.<br /><br />Bob's Skype program then punches a hole in its own network firewall: It sends a UDP packet to 1.1.1.1 port 1414. This is discarded by Alice's firewall, but Bob's firewall doesn't know that. 
It now thinks that anything which comes from 1.1.1.1 port 1414 and is addressed to Bob's IP address 2.2.2.2 and port 2828 is legitimate - it must be the response to the query which has just been sent.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT9xrjtWzdp2q4cmrHwM-7LmJhOb3WPcyNkpsUQEY639rWxIzTDaZWQ9HtA4dJahJORGbkKsx1rW3P_PgHkvcqTdf9E_gco1GqwqqxCP6UlsLKEYIWxfVRnanmMOM9bGPqywu1CZLwOZNs/s1600-h/1.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT9xrjtWzdp2q4cmrHwM-7LmJhOb3WPcyNkpsUQEY639rWxIzTDaZWQ9HtA4dJahJORGbkKsx1rW3P_PgHkvcqTdf9E_gco1GqwqqxCP6UlsLKEYIWxfVRnanmMOM9bGPqywu1CZLwOZNs/s320/1.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5060645821567008690" /></a> <br />Step 2: Bob tries to reach Alice; the attempt punches a hole through Bob's firewall.<br /><br />Now the Skype server passes Bob's coordinates on to Alice, whose Skype application attempts to contact Bob at 2.2.2.2:2828. 
Bob's firewall sees the recognised sender address and passes the apparent response on to Bob's PC - and his Skype phone rings.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyuy5JfUST2vtl2VWUv9SjAriLCpW-YGZlMeQkmX5uxEE96rJB8YFuKuSGJ3-zMQc98krBXzbW9yQQgalU8wFQKia0CuyscCtIQSfxOuErF-hkK73I9GLy9SkbVXjxYgF-SWASsRMNXhQE/s1600-h/2.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyuy5JfUST2vtl2VWUv9SjAriLCpW-YGZlMeQkmX5uxEE96rJB8YFuKuSGJ3-zMQc98krBXzbW9yQQgalU8wFQKia0CuyscCtIQSfxOuErF-hkK73I9GLy9SkbVXjxYgF-SWASsRMNXhQE/s320/2.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5060646216703999938" /></a> <br />Step 3: Alice finally reaches Bob's computer through the hole.<br /><br /><span style="font-weight:bold;"><br />Doing the rounds</span><br /><br />This description is of course somewhat simplified - the details depend on the specific properties of the firewalls used. But it corresponds in principle to our observations of the process of establishing a connection between two Skype clients, each of which was behind a Linux firewall. The firewalls were configured with NAT for a LAN and permitted outgoing UDP traffic.<br /><br />Linux's NAT functions have the VoIP-friendly property of, at least initially, not changing the ports of outgoing packets. The NAT router merely replaces the private, local IP address with its own address - the UDP source port selected by Skype is retained. Only when multiple clients on the local network use the same source port does the NAT router stick its oar in and reset the port to a previously unused value. This is because each set of two IP addresses and ports must be able to be unambiguously assigned to a connection between two computers at all times. 
The router will subsequently have to reconstruct the internal IP address of the original sender from the response packet's destination port.<br /><br />Other NAT routers will try to assign ports in a specific range, for example ports from 30,000 onwards, and translate UDP port 1414, if possible, to 31414. This is, of course, no problem for Skype - the procedure described above continues to work in a similar manner without limitations.<br /><br />It becomes a little more complicated if a firewall simply assigns ports in sequence, like Check Point's FireWall-1: the first connection is assigned 30001, the next 30002, etc. The Skype server knows that Bob is talking to it from port 31234, but the connection to Alice will run via a different port. But even here Skype is able to outwit the firewall. It simply runs through the ports above 31234 in sequence, hoping at some point to stumble on the right one. But if this doesn't work first go, Skype doesn't give up. Bob's Skype opens a new connection to the Skype server, the source port of which is then used for a further sequence of probes.<br /><br />Nevertheless, in very active networks Alice may not find the correct, open port. The same also applies for a particular type of firewall, which assigns every new connection to a random source port. The Skype server is then unable to tell Alice where to look for a suitable hole in Bob's firewall.<br /><br />However, even then, Skype doesn't give up. In such cases a Skype server is then used as a relay. It accepts incoming connections from both Alice and Bob and relays the packets onwards. This solution is always possible, as long as the firewall permits outgoing UDP traffic. It involves, however, an additional load on the infrastructure, because all audio data has to run through Skype's servers. 
The extended packet transmission times can also result in an unpleasant delay.<br /><br />Use of the procedure described above is not limited to Skype and is known as "UDP hole punching". Other network services such as the Hamachi gaming VPN application, which relies on peer-to-peer communication between computers behind firewalls, use similar procedures. A more developed form has even made it to the rank of a standard - RFC 3489 "Simple Traversal of UDP through NAT" (STUN) describes a protocol with which two STUN clients can, with the help of a STUN server, get around the restrictions of NAT in many cases. The draft Traversal Using Relay NAT (TURN) protocol describes a possible standard for relay servers.<br /><br /><br /><span style="font-weight:bold;">DIY hole punching</span><br /><br />With a few small utilities, you can try out UDP hole punching for yourself. The tools required, hping2 and netcat, can be found in most Linux distributions. local is a computer behind a stateful Linux firewall (local-fw) which only permits outgoing (UDP) connections. For simplicity, in our test the test computer remote was connected directly to the internet with no firewall.<br /><br />First, start a UDP listener on UDP port 14141 on the local/1 console behind the firewall:<br /><br />local/1# nc -u -l -p 14141<br /><br />An external computer "remote" then attempts to contact it.<br /><br />remote# echo "hello" | nc -p 53 -u local-fw 14141<br /><br />However, as expected, nothing is received on local/1 and, thanks to the firewall, nothing is returned to remote. Now on a second console, local/2, hping2, our universal tool for generating IP packets, punches a hole in the firewall:<br /><br />local/2# hping2 -c 1 -2 -s 14141 -p 53 remote<br /><br />As long as remote is behaving itself, it will send back a "port unreachable" response via ICMP - however this is of no consequence. 
On the second attempt<br /><br />remote# echo "hello" | nc -p 53 -u local-fw 14141<br /><br />the netcat listener on console local/1 then coughs up a "hello" - the UDP packet from outside has passed through the firewall and arrived at the computer behind it.<br /><br />Network administrators who do not appreciate this sort of hole in their firewall and are worried about abuse are left with only one option - they have to block outgoing UDP traffic, or limit it to essential individual cases. UDP is not required for normal internet communication anyway - the web, e-mail and suchlike all use TCP. Streaming protocols may, however, encounter problems, as they often use UDP because of the reduced overhead.<br /><br />Astonishingly, hole punching also works with TCP. After an outgoing SYN packet, the firewall / NAT router will forward incoming packets with suitable IP addresses and ports to the LAN even if they acknowledge nothing, or acknowledge the wrong sequence number (ACK). Linux firewalls, at least, clearly fail to evaluate this information consistently. Establishing a TCP connection in this way is, however, not quite so simple, because Alice does not have the sequence number sent in Bob's first packet. The packet containing this information was discarded by her firewall.<br /><br />Posted by Praveen, 2007-05-03: Why do df and du report different output?<br /><br />You will never notice something like this on a FreeBSD or Linux desktop home system or your personal UNIX or Linux workstation. However, sometimes on a production UNIX server you will notice that df (display free disk space) and du (display disk usage statistics) report different output. Usually df will report a bigger disk usage than du.<br /><br />If a Linux or UNIX inode is deallocated while the file is still open, you will see this problem. 
If you are using a clustered file system (such as GFS) you may see this scenario commonly.<br /><br />Note: the following examples are FreeBSD and GNU/Linux specific.<br /><br />The following is the normal output of df and du for the /tmp filesystem:<br /><br /> # df -h /tmp<br /><br />Output:<br /><br /> Filesystem Size Used Avail Capacity Mounted on<br /> /dev/ad0s1e 496M 22M 434M 5% /tmp<br /><br />Now type the du command:<br /><br /> # du -d 0 -h /tmp/<br /><br />Output:<br /><br /> 22M /tmp/<br /><br />Why is there a mismatch between df and du outputs?<br /><br />However, sometimes they report different output (a bigger disk usage), for example:<br /><br /> # df -h /tmp/<br /><br />Output:<br /><br /> Filesystem Size Used Avail Capacity Mounted on<br /> /dev/ad0s1e 496M 39M 417M 9% /tmp<br /><br />Now type the du command:<br /><br /> # du -d 0 -h /tmp/ <br /><br />Output:<br /><br /> 22M /tmp/<br /><br />As you can see, df and du report different output. Many new UNIX admins get confused by this output (39M vs 22M).<br /><br />An open file descriptor is the main cause of such a mismatch. For example, if a file called /tmp/application.log is held open by a third-party application or by a user, and the same file is deleted, df and du report different output. You can use the lsof command to verify this:<br /><br /> # lsof | grep tmp<br /><br />Output:<br /><br /> bash 594 root cwd VDIR 0,86 512 2 /tmp<br /> bash 634 root cwd VDIR 0,86 512 2 /tmp<br /> pwebd 635 root cwd VDIR 0,86 512 2 /tmp<br /> pwebd 635 root 3rW VREG 0,86 17993324 68 /tmp (/dev/ad0s1e)<br /> pwebd 635 root 5u VREG 0,86 0 69 /tmp (/dev/ad0s1e)<br /> lsof 693 root cwd VDIR 0,86 512 2 /tmp<br /> grep 694 root cwd VDIR 0,86 512 2 /tmp<br /><br />You can see a file of 17993324 bytes (about 17 MB) is open on /tmp by pwebd (our in-house software) but was accidentally deleted by me. 
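<br /><br />On Linux, you can also reclaim the space held by such a deleted-but-open file without killing the process, by truncating the file through its /proc file-descriptor link. A minimal sketch (the file name is illustrative; here $$ is the demonstrating shell's own PID and 3 its descriptor number, whereas on a real server you would substitute the PID and FD column values that lsof reports):

```shell
# Reproduce the situation: open descriptor 3, write 1 MiB through it,
# then delete the file. df still counts the blocks; du no longer sees them.
exec 3> /tmp/demo.log
head -c 1048576 /dev/zero >&3
rm /tmp/demo.log

# lsof +L1 lists open files with a link count of zero, i.e. deleted-but-open.
lsof +L1 /tmp

# Truncate the still-open file via its /proc link to give the blocks back.
: > "/proc/$$/fd/3"

# Finally, close the descriptor (or have the owning application do so).
exec 3>&-
```

After the truncation, df drops back to the figure du reports, even while the process still holds the descriptor open.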
You can recreate the above scenario on your Linux, FreeBSD or other Unix-like system as follows:<br /><br />First, note down the /home file system output:<br /><br /> # df -h /home<br /> # du -d 0 -h /home<br /><br />If you are using Linux then use du as follows:<br /><br /> # du -s -h /home<br /><br />Now create a big file:<br /><br /> # cd /home/user<br /> # cat /bin/* >> demo.txt<br /> # cat /sbin/* >> demo.txt<br /><br />Log in on another console and open the file demo.txt using the vi text editor:<br /><br /> # vi /home/user/demo.txt<br /><br />Do not exit from vi (keep it running).<br /><br />Go back to the first console and remove the file demo.txt:<br /><br /> # rm demo.txt<br /><br />Now run both du and df to see the difference.<br /><br /> # df -h /home<br /> # du -d 0 -h /home<br /><br />If you are using Linux then use du as follows:<br /><br /> # du -s -h /home<br /><br />Now go back to the other terminal and close vi. With vi closed, the root cause of the problem is removed and the du and df outputs should agree again.<br /><br />Posted by Praveen, 2007-05-03: Linux Tips!!<br /><br />TIP 1:<br /><br /> Is NTP Working?<br /><br /> STEP 1 (Test the current server):<br /><br /> Try issuing the following command:<br /><br /> $ ntpq -pn<br /><br /> remote refid st t when poll reach delay offset jitter<br /> ===================================================<br /> tock.usno.navy 0.0.0.0 16 u - 64 0 0.000 0.000 4000.00<br /><br /> The above is an example of a problem.<br /> Compare it to a working configuration.<br /><br /> $ ntpq -pn<br /><br /> remote refid st t when poll reach delay offset jitter<br /> ========================================================<br /> +128.4.40.12 128.4.40.10 2 u 107 128 377 25.642 3.350 1.012<br /> 127.127.1.0 127.127.1.0 10 l 40 64 377 0.000 0.000 0.008<br /> +128.91.2.13 128.4.40.12 3 u 34 128 377 
21.138 6.118 0.398<br /> *192.5.41.41 .USNO. 1 u 110 128 377 33.69 9.533 3.534<br /><br /> STEP 2 (Configure the /etc/ntp.conf):<br /><br /> $ cat /etc/ntp.conf<br /><br /> # My simple client-only ntp configuration.<br /> server timeserver1.upenn.edu<br /> # ping -a timeserver1.upenn.edu shows the IP address 128.91.2.13<br /> # which is used in the restrict below<br /> restrict 128.91.2.13<br /> server tock.usno.navy.mil<br /> restrict 192.5.41.41<br /> server 128.4.40.12<br /> restrict 128.4.40.12<br /> server 127.127.1.0 # local clock<br /> fudge 127.127.1.0 stratum 10<br /> driftfile /etc/ntp/drift<br /> restrict default ignore<br /> restrict 127.0.0.0 mask 255.0.0.0<br /> authenticate no<br /><br /> STEP 3 (Configure /etc/ntp/step-tickers):<br /><br /> The values for server above are placed in the "/etc/ntp/step-tickers" file<br /><br /> $ cat /etc/ntp/step-tickers<br /><br /> timeserver1.upenn.edu<br /> tock.usno.navy.mil<br /> 128.4.40.12<br /><br /> The startup script /etc/rc.d/init.d/ntpd will grab the servers in this<br /> file and execute the ntpdate command as follows:<br /><br /> /usr/sbin/ntpdate -s -b -p 8 timeserver1.upenn.edu<br /><br /> Why? Because if the time is too far off, ntpd will not start. The command above sets the<br /> clock. If the system time deviates from true time by more than 1000 seconds,<br /> the ntpd daemon will enter panic mode and exit.<br /><br /> STEP 4 (Restart the service and check):<br /><br /> Issue the restart command<br /><br /> /etc/init.d/ntpd restart<br /><br /> then check the values for "ntpq -pn",<br /> which should now resemble the working output in step 1.<br /><br /> ntpq -pn<br /><br /> SPECIAL NOTE:<br /><br /> Time is always stored in the kernel as the number of seconds since<br /> midnight of the 1st of January 1970 UTC, regardless of whether the<br /> hardware clock is stored as UTC or not. Conversions to local time<br /> are done at run-time. 
So, it's easy to get the time in different<br /> timezones for only the current session as follows:<br /><br /><br /> $ export TZ=EST<br /> $ date<br /> Mon Aug 2 10:34:04 EST 2004<br /><br /> $ export TZ=NET<br /> $ date<br /> Mon Aug 2 15:34:18 NET 2004<br /><br /> The following are possible values for TZ:<br /><br /> Hours From Greenwich Mean Time (GMT) Value Description<br /> 0 GMT Greenwich Mean Time<br /> +1 ECT European Central Time<br /> +2 EET European Eastern Time<br /> +2 ART<br /> +3 EAT Saudi Arabia<br /> +3.5 MET Iran<br /> +4 NET<br /> +5 PLT West Asia<br /> +5.5 IST India<br /> +6 BST Central Asia<br /> +7 VST Bangkok<br /> +8 CTT China<br /> +9 JST Japan<br /> +9.5 ACT Central Australia<br /> +10 AET Eastern Australia<br /> +11 SST Central Pacific<br /> +12 NST New Zealand<br /> -11 MIT Samoa<br /> -10 HST Hawaii<br /> -9 AST Alaska<br /> -8 PST Pacific Standard Time<br /> -7 PNT Arizona<br /> -7 MST Mountain Standard Time<br /> -6 CST Central Standard Time<br /> -5 EST Eastern Standard Time<br /> -5 IET Indiana East<br /> -4 PRT Atlantic Standard Time<br /> -3.5 CNT Newfoundland<br /> -3 AGT Eastern South America<br /> -3 BET Eastern South America<br /> -1 CAT Azores<br /><br /> DST timezone<br /><br /><br /> 0 BST for British Summer.<br /> +400 ADT for Atlantic Daylight.<br /> +500 EDT for Eastern Daylight.<br /> +600 CDT for Central Daylight.<br /> +700 MDT for Mountain Daylight.<br /> +800 PDT for Pacific Daylight.<br /> +900 YDT for Yukon Daylight.<br /> +1000 HDT for Hawaii Daylight.<br /> -100 MEST for Middle European Summer,<br /> MESZ for Middle European Summer,<br /> SST for Swedish Summer and FST for French Summer.<br /> -700 WADT for West Australian Daylight.<br /> -1000 EADT for Eastern Australian Daylight.<br /> -1200 NZDT for New Zealand Daylight.<br /><br /> The following is an example of setting the TZ environment variable<br /> with explicit rules for when the daylight saving changes go into effect.<br /><br /> $ export
TZ=EST+5EDT,M4.1.0/2,M10.5.0/2<br /><br /> Take a look at the last part, "M10.5.0/2". What does it mean? Here is the<br /> documentation:<br /><br /><br /> Mm.w.d This specifies day d (0 <= d <= 6) of week w (1 <= w <= 5) of<br /> month m (1 <= m <= 12). Week 1 is the first week in which day d<br /> occurs and week 5 is the last week in which day d occurs. Day 0<br /> is a Sunday.<br /><br /> The time fields specify when, in the local time currently in<br /> effect, the change to the other time occurs. If omitted,<br /> the default is 02:00:00.<br /><br /> So this is what it means. M10 stands for October, and the 5 selects the last week<br /> that includes a Sunday (here the fifth; note the 0 in M10.5.0/2 means Sunday). To see that it is<br /> the fifth week, see the calendar below. The time change occurs at 2am in<br /> the morning. (Special Note: In 2007, DST was extended. See TIP 230).<br /><br /> October<br /> Su Mo Tu We Th Fr Sa<br /> 1 2<br /> 3 4 5 6 7 8 9<br /> 10 11 12 13 14 15 16<br /> 17 18 19 20 21 22 23<br /> 24 25 26 27 28 29 30<br /> 31<br /><br /> Prove it. Take the following program, sunrise, which calculates sunrise<br /> and sunset for a given latitude and longitude.
This program can be downloaded<br /> from the following location:<br /> http://sourceforge.net/direct-dl/mchirico/souptonuts/working_with_time.tar.gz<br /><br /> Below is a bash script that will run the program for the next 100 days.<br /><br /> #!/bin/bash<br /> # program: next100days Mike Chirico<br /> # download:<br /> # http://sourceforge.net/direct-dl/mchirico/souptonuts/working_with_time.tar.gz<br /> #<br /> # This will calculate the sunrise and sunset for<br /> # latitude 39.95 Note must convert to degrees<br /> # longitude 75.15 Note must convert to degrees<br /> lat=39.95<br /> long=75.15<br /> for (( i=0; i <= 100; i++))<br /> do<br /> sunrise `date -d "+$i day" "+%Y %m %d"` $lat $long<br /> done<br /><br /> Take a look at the following sample output.<br /><br /> $ export TZ=EST+5EDT,M4.1.0/2,M10.5.0/2<br /> $ ./next100days<br /><br /> Sunrise 08-24-2004 06:21:12 Sunset 08-24-2004 19:43:42<br /> Sunrise 08-25-2004 06:22:09 Sunset 08-25-2004 19:42:12<br /> Sunrise 08-26-2004 06:23:06 Sunset 08-26-2004 19:40:41<br /> Sunrise 08-27-2004 06:24:03 Sunset 08-27-2004 19:39:09<br /> Sunrise 08-28-2004 06:25:00 Sunset 08-28-2004 19:37:37<br /> Sunrise 08-29-2004 06:25:56 Sunset 08-29-2004 19:36:04<br /> Sunrise 08-30-2004 06:26:53 Sunset 08-30-2004 19:34:31<br /> Sunrise 08-31-2004 06:27:50 Sunset 08-31-2004 19:32:57<br /> Sunrise 09-01-2004 06:28:46 Sunset 09-01-2004 19:31:22<br /> Sunrise 09-02-2004 06:29:43 Sunset 09-02-2004 19:29:47<br /> ..[values omitted ]<br /> Sunrise 10-28-2004 07:25:31 Sunset 10-28-2004 18:02:34<br /> Sunrise 10-29-2004 07:26:38 Sunset 10-29-2004 18:01:19<br /> Sunrise 10-30-2004 07:27:46 Sunset 10-30-2004 18:00:06<br /> Sunrise 10-31-2004 06:28:53 Sunset 10-31-2004 16:58:54<br /> Sunrise 11-01-2004 06:30:01 Sunset 11-01-2004 16:57:44<br /> Sunrise 11-02-2004 06:31:10 Sunset 11-02-2004 16:56:35<br /><br /> Compare 10-30-2004 with 10-31-2004. 
Sunrise is an hour earlier because<br /> daylight saving time has ended, just as predicted.<br /><br /> There is an easier way to switch between timezones. Take a look at the<br /> directory zoneinfo as follows:<br /><br /> $ ls /usr/share/zoneinfo<br /><br /> Africa Chile Factory Iceland Mexico posix UCT<br /> America CST6CDT GB Indian Mideast posixrules Universal<br /> Antarctica Cuba GB-Eire Iran MST PRC US<br /> Arctic EET GMT iso3166.tab MST7MDT PST8PDT UTC<br /> Asia Egypt GMT0 Israel Navajo right WET<br /> Atlantic Eire GMT-0 Jamaica NZ ROC W-SU<br /> Australia EST GMT+0 Japan NZ-CHAT ROK zone.tab<br /> Brazil EST5EDT Greenwich Kwajalein Pacific Singapore Zulu<br /> Canada Etc Hongkong Libya Poland SystemV<br /> CET Europe HST MET Portugal Turkey<br /><br /> TZ can be set to any one of these files. Some of these are directories and contain<br /> subdirectories, such as ./posix/America. This way you do not have to enter the<br /> timezone, offset, and range for DST, since it has already been calculated.<br /><br /> $ export TZ=:/usr/share/zoneinfo/posix/America/Aruba<br /> $ export TZ=:/usr/share/zoneinfo/Egypt<br /><br /><br /> Reference:<br /> http://prdownloads.sourceforge.net/cpearls/date_calc.tar.gz?download<br /><br /> Also see (TIP 27).<br /> Also see (TIP 103) on chrony, which is very similar to ntpd.<br /> Note time settings can usually be found in /etc/sysconfig/clock<br /> <br /><br /><br /><br />TIP 2:<br /><br /> cpio works like tar, only better.<br /><br /> STEP 1 (Create two directories with data, ../dir1 and ../dir2)<br /><br /> mkdir -p ../dir1<br /> mkdir -p ../dir2<br /> cp /etc/*.conf ../dir1/.<br /> cp /etc/*.cnf ../dir2/.<br /><br /> This backs up all your .conf and .cnf files.<br /><br /> STEP 2 (Piping the files to tar)<br /><br /> cpio works like tar but can take input<br /> from the "find" command.<br /><br /> $ find ../dir1/ | cpio -o --format=tar > test.tar<br /> or<br /> $ find ../dir1/ | cpio -o -H tar > test2.tar<br /><br />
Same command without the ">"<br /><br /> $ find ../dir1/ | cpio -o --format=tar -F test.tar<br /> or<br /> $ find ../dir1/ | cpio -o -H tar -F test2.tar<br /><br /> Using append<br /><br /> $ find ../dir1/ | cpio -o --format=tar -F test.tar<br /> or<br /> $ find ../dir2/ | cpio -o --format=tar --append -F test.tar<br /><br /> STEP 3 (List contents of the tar file)<br /><br /> $ cpio -it < test.tar<br /> or<br /> $ cpio -it -F test.tar<br /><br /> STEP 4 (Extract the contents)<br /><br /> $ cpio -i -F test.tar<br /><br /><br /><br />TIP 3:<br /><br /> Working with tar. The basics with encryption.<br /><br /> STEP 1 (Using the tar command on the directory /stuff)<br /><br /> Suppose you have a directory /stuff.<br /> To tar everything in stuff, creating a ".tar" file:<br /><br /> $ tar -cvf stuff.tar stuff<br /><br /> Which will create "stuff.tar".<br /><br /> STEP 2 (Using the tar command to create a ".tar.gz" of /stuff)<br /><br /> $ tar -czf stuff.tar.gz stuff<br /><br /> STEP 3 (List the files in the archive)<br /><br /> $ tar -tzf stuff.tar.gz<br /> or<br /> $ tar -tf stuff.tar<br /><br /> STEP 4 (A way to list specific files)<br /><br /> Note, pipe the results to a file and edit:<br /><br /> $ tar -tzf stuff.tar.gz > mout<br /><br /> Then, edit mout to only include the files you want:<br /><br /> $ tar -T mout -xzf stuff.tar.gz<br /><br /> The above command will only get the files in mout.<br /> Of course, if you want them all:<br /><br /> $ tar -xzf stuff.tar.gz<br /><br /> STEP 5 (ENCRYPTION)<br /><br /> $ tar -zcvf - stuff|openssl des3 -salt -k secretpassword | dd of=stuff.des3<br /><br /> This will create stuff.des3...don't forget the password you<br /> put in place of secretpassword. This can be done interactively as<br /> well.<br /><br /> $ dd if=stuff.des3 |openssl des3 -d -k secretpassword|tar zxf -<br /><br /> NOTE: above there is a "-" at the end...
this will<br /> extract everything.<br /><br /><br /><br />TIP 4:<br /><br /> Creating a Virtual File System and Mounting it with a Loopback Device.<br /><br /> STEP 1 (Construct a 10MB file)<br /><br /> $ dd if=/dev/zero of=/tmp/disk-image count=20480<br /><br /> By default dd uses a block size of 512 bytes, so the size will be 20480*512 bytes (10MB)<br /><br /> STEP 2 (Make an ext2 or ext3 file system) -- ext2 shown here.<br /><br /> $ mke2fs -q /tmp/disk-image<br /><br /> or if you want ext3<br /><br /> $ mkfs -t ext3 -q /tmp/disk-image<br /><br /> yes, you can even use reiser, but you'll need to create a bigger<br /> disk image. Something like "dd if=/dev/zero of=/tmp/disk-image count=50480".<br /><br /> $ mkfs -t reiserfs -q /tmp/disk-image<br /><br /> Hit yes for confirmation. It only asks this because the target is a regular file,<br /> not a block device.<br /><br /><br /> STEP 3 (Create a directory "virtual-fs" and mount. This has to be done as root)<br /><br /> $ mkdir /virtual-fs<br /> $ mount -o loop=/dev/loop0 /tmp/disk-image /virtual-fs<br /><br /> SPECIAL NOTE: if you mount a second device you will have to increase the<br /> loop count: loop=/dev/loop1, loop=/dev/loop2, ... loop=/dev/loopn<br /><br /> Now it operates just like a disk. This virtual filesystem can be mounted<br /> when the system boots by adding the following to the "/etc/fstab" file. Then,<br /> to mount, just type "mount /virtual-fs".<br /><br /> /tmp/disk-image /virtual-fs ext2 rw,loop=/dev/loop0 0 0<br /><br /> STEP 4 (When done, umount it)<br /><br /> $ umount /virtual-fs<br /><br /><br /> SPECIAL NOTE: If you are using Fedora core 2, in the /etc/fstab you can take<br /> advantage of acl properties for this mount. Note the acl next to the<br /> rw entry.
This is shown here with ext3.<br /><br /> /tmp/disk-image /virtual-fs ext3 rw,acl,loop=/dev/loop1 0 0<br /><br /> Also, if you are using Fedora core 2 and above, you can mount the file<br /> on a cryptoloop.<br /><br /> $ dd if=/dev/urandom of=disk-aes count=20480<br /><br /><br /> $ modprobe loop<br /> $ modprobe cryptoloop<br /> $ modprobe aes<br /><br /> $ losetup -e aes /dev/loop0 disk-aes<br /> $ mkfs -t ext2 /dev/loop0<br /> $ mount -o loop,encryption=aes disk-aes <mount><br /><br /><br /> If you do not have Fedora core 2, then you can build the kernel from source<br /> with some of the following options (not complete, yet)<br /> reference:<br /> http://cvs.sourceforge.net/viewcvs.py/cpearls/cpearls/src/posted_on_sf/acl/ehd.pdf?rev=1.1&view=log<br /><br /> Cryptographic API Support (CONFIG_CRYPTO)<br /> generic loop cryptographic support (CONFIG_CRYPTOLOOP)<br /> Cryptographic ciphers (CONFIG_CIPHERS)<br /> Enable one or more ciphers (CONFIG_CIPHER_*) such as AES.<br /><br /><br /> HELPFUL INFORMATION: It is possible to bind mount partitions, or associate the<br /> mounted partition with a directory name.<br /><br /> # mount --bind /virtual-fs /home/mchirico/vfs<br /><br /> Also, if you want to see what filesystems are currently mounted, "cat" the<br /> file "/etc/mtab"<br /><br /> $ cat /etc/mtab<br /><br /> Also see TIP 91.<br /><br /><br /><br />TIP 5:<br /><br /> Setting up 2 IP Addresses on "One" NIC.
This example is on ethernet.<br /><br /> STEP 1 (The settings for the initial IP address)<br /><br /> $ cat /etc/sysconfig/network-scripts/ifcfg-eth0<br /><br /> DEVICE=eth0<br /> BOOTPROTO=static<br /> BROADCAST=192.168.99.255<br /> IPADDR=192.168.1.155<br /> NETMASK=255.255.252.0<br /> NETWORK=192.168.1.0<br /> ONBOOT=yes<br /><br /> STEP 2 (2nd IP address: )<br /><br /> $ cat /etc/sysconfig/network-scripts/ifcfg-eth0:1<br /><br /> DEVICE=eth0:1<br /> BOOTPROTO=static<br /> BROADCAST=192.168.99.255<br /> IPADDR=192.168.1.182<br /> NETMASK=255.255.252.0<br /> NETWORK=192.168.1.0<br /> ONBOOT=yes<br /><br /> SUMMARY Note, in STEP 1 the filename is "ifcfg-eth0", whereas in<br /> STEP 2 it's "ifcfg-eth0:1"; also note the matching<br /> entries for "DEVICE=...". Also, obviously, the<br /> "IPADDR" is different as well.<br /><br /><br /><br />TIP 6:<br /><br /> Sharing Directories Among Several Users.<br /><br /> Several people are working on a project in "/home/share"<br /> and they need to create documents and programs so that<br /> others in the group can edit and execute these documents<br /> as needed. Also see (TIP 186) for adding existing users<br /> to groups.<br /><br /> $ /usr/sbin/groupadd share<br /> $ chown -R root.share /home/share<br /> $ /usr/bin/gpasswd -a <username> share<br /> $ chmod 2775 /home/share<br /><br /> $ ls -ld /home/share<br /> drwxrwsr-x 2 root share 4096 Nov 8 16:19 /home/share<br /> ^---------- Note the s bit, which was set with the chmod 2775<br /><br /> $ cat /etc/group<br /> ...<br /> share:x:502:chirico,donkey,zoe<br /> ... ^------- users are added to this group.<br /><br /> The user may need to log in again to get access.
Or, if the user is currently<br /> logged in, they can run the following command:<br /><br /> $ su - <username><br /><br /> Note, the above step is recommended over "newgrp - share" since currently<br /> newgrp in FC2, FC3, and FC4 gets access to the group but the umask is not<br /> correctly formed.<br /><br /> As root you can test their account.<br /><br /> $ su - <username> "You need the '-' to pick up their environment: '$ su - chirico' "<br /><br /> Note: SUID, SGID, Sticky bit. Only the leftmost octal digit is examined, and "chmod 755" is used<br /> as an example of the full command. But, anything else could be used as well. Normally<br /> you'd want executable permissions.<br /><br /> Octal digit Binary value Meaning Example usage<br /> 0 000 all cleared $ chmod 0755 or chmod 755<br /> 1 001 sticky $ chmod 1755<br /> 2 010 setgid $ chmod 2755<br /> 3 011 setgid, sticky $ chmod 3755<br /> 4 100 setuid $ chmod 4755<br /> 5 101 setuid, sticky $ chmod 5755<br /> 6 110 setuid, setgid $ chmod 6755<br /> 7 111 setuid, setgid, sticky $ chmod 7755<br /><br /> A few examples applied to a directory below. In the first example all users in the group can<br /> add files to directory "dirA" and they can delete their own files.
Users cannot delete other<br /> users' files.<br /><br /> Sticky bit:<br /> $ chmod 1770 dirA<br /><br /> Below, files created within the directory have the group ID of the directory, rather than that<br /> of the default group setting for the user who created the file.<br /><br /> Set group ID bit:<br /> $ chmod 2755 dirB<br /><br /><br /><br /><br />TIP 7:<br /><br /> Getting Information on Commands<br /><br /> The "info" utility is great for getting information about the system.<br /> Here's a quick key on using "info" from the terminal prompt.<br /><br /> 'q' exits.<br /> 'u' moves up to the table of contents of the current section.<br /> 'n' moves to the next chapter.<br /> 'p' moves to the previous chapter.<br /> 'space' goes into the selected section.<br /><br /><br /> The following is a good starting point:<br /><br /> $ info coreutils<br /><br /> Need to find out what a certain program does?<br /><br /> $ whatis open<br /> open (2) - open and possibly create a file or device<br /> open (3) - perl pragma to set default PerlIO layers for input and output<br /> open (3pm) - perl pragma to set default PerlIO layers for input and output<br /> open (n) - Open a file-based or command pipeline channel<br /><br /> To get specific information about the open command:<br /><br /> $ man 2 open<br /><br /> also try 'keyword' search, which is the same as the apropos command.<br /> For example, to find all the man pages on selinux, type the following:<br /><br /> $ man -k selinux<br /><br /> or the man full word search.
This is the same as the whatis command.<br /><br /> $ man -f <some><br /><br /> This is a hint once you are inside man.<br /><br /> space moves forward one page<br /> b moves backward<br /> y scrolls up one line "yikes, I missed it!"<br /> g goes to the beginning<br /> q quits<br /> /<string> search, repeat search with n<br /> m mark, enter a letter like "a", then, ' to go back<br /> ' enter a letter that is marked.<br /><br /><br /><br /> To get section numbers<br /><br /> $ man 8 ping<br /><br /> Note the numbers are used as follows<br /> (This is OpenBSD)<br /><br /> 1 General Commands<br /> 2 System Calls and Error Numbers<br /> 3 C Libraries<br /> 3p perl<br /> 4 Devices and device drivers<br /> 5 File Formats and config files<br /> 6 Game instructions<br /> 7 Miscellaneous information<br /> 8 System maintenance<br /> 9 Kernel internals<br /><br /> To find the man page file directly, e.g. for the "ls" command:<br /><br /> $ whereis -m ls<br /> ls: /usr/share/man/man1/ls.1.gz /usr/share/man/man1/ls.1 /usr/share/man/man1p/ls.1p<br /><br /> To read this file directly, do the following:<br /><br /> $ man /usr/share/man/man1/ls.1.gz<br /><br /> If you want to know the manpath, execute manpath.<br /><br /> $ manpath<br /> /usr/share/man:/usr/X11R6/man:/usr/local/share/man:/usr/local/pgsql/man:/usr/man:/usr/local/man<br /><br /><br /><br />TIP 8:<br /><br /> How to Put a "Running Job" in the Background.<br /><br /> You're running a job at the terminal prompt, and it's taking<br /> a very long time. You want to put the job in the background.<br /><br /> "CTL - z" Temporarily suspends the job<br /> $ jobs This will list all the jobs<br /> $ bg %jobnumber (bg %1) To run in the background<br /> $ fg %jobnumber To bring back in the foreground<br /><br /> Need to kill all jobs -- say you're using several suspended<br /> emacs sessions and you just want everything to exit.<br /><br /> $ kill -9 `jobs -p`<br /><br /> The "jobs -p" gives the process number of each job, and the<br /> kill -9 kills everything.
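The same "jobs -p" idiom also works inside a script; a minimal bash sketch (the sleeps just stand in for long-running jobs):

```shell
#!/bin/bash
# Start two background jobs, list their process ids, then kill them all.
sleep 100 &
sleep 100 &
jobs -p               # one process id per line
kill -9 $(jobs -p)    # forcibly terminate every background job
wait 2>/dev/null      # reap the killed jobs
echo "all jobs killed"
```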
Yes, sometimes "kill -9" is excessive<br /> and you should issue a "kill -15" that allows jobs to clean up.<br /> However, for emacs sessions, I prefer "kill -9" and haven't had<br /> a problem.<br /><br /> Sometimes you need to list the process id along with job<br /> information. For instance, here's the process id with the listing.<br /><br /> $ jobs -l<br /><br /> Note you can also renice a job, or give it lower priority.<br /><br /> $ nice -n +15 find . -ctime 2 -type f -exec ls {} \; > last48hours<br /> ^z<br /> $ bg<br /><br /> So above, that was a ctl-z to suspend. Then, bg to run it in<br /> the background. Now, if you want to lower the priority further,<br /> you just renice it, once you know the process id.<br /><br /> $ jobs -pl<br /> [1]+ 29388 Running nice -n +15 find . -ctime 2 -exec ls -l {} \; >mout &<br /><br /> $ renice +30 -p 29388<br /> 29388: old priority 15, new priority 19<br /><br /> 19 was the lowest priority for this job. You cannot increase<br /> the priority unless you are root.<br /><br /><br /><br />TIP 9:<br /><br /> Need to Delete a File for Good -- not even GOD can recover.<br /><br /> You have a file "secret". The following makes it so no one<br /> can read it. If the file was 12 bytes, it's now 4096 after it<br /> has been overwritten 100 times. There's no way to recover this.<br /><br /> $ shred -n 100 -z secret<br /><br /> Want to remove the file? Use the "u" option.<br /><br /> $ shred -n 100 -z -u test2<br /><br /> It can be applied to a device<br /><br /> $ shred -n 100 -z -u /dev/fd0<br /><br /><br /> CAUTION: Note that shred relies on a very important assumption: that the file system overwrites data<br /> in place. This is the traditional way to do things, but many modern file system designs do not<br /> satisfy this assumption. The following are examples of file systems on which shred is not effective, or<br /> is not guaranteed to be effective in all file system modes:<br /><br /> * log-structured or journaled file systems, such as those supplied with<br /> AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)<br /><br /><br /><br /> Also see (TIP 52).<br /><br /><br /><br />TIP 10:<br /><br /> Who and What is doing What on Your System - finding open sockets,<br /> files etc.<br /><br /> $ lsof<br /> or as root<br /> $ watch lsof -i<br /><br /> To list all open Internet files, use:<br /><br /> $ lsof -i -U<br /><br /> You can also get very specific about ports. Do this as root for low<br /> ports.<br /><br /> $ lsof -i TCP:3306<br /><br /> Or, look at UDP ports as follows:<br /><br /> $ lsof -i UDP:1812<br /><br /> (See TIP 118)<br /><br /> Also try fuser. Suppose you have a mounted file-system, and you need<br /> to umount it. To list the users on the file-system /work<br /><br /> $ fuser -u /work<br /><br /> To kill all processes accessing the file system /work in any way.<br /><br /> $ fuser -km /work<br /><br /> Or better yet, maybe you want to eject a cdrom on /mnt/cdrom<br /><br /> $ fuser -km /mnt/cdrom<br /><br /><br /> If you need IO load information about your system, you can execute<br /> iostat. But note, the very first iostat report gives a snapshot since<br /> the last boot.
You typically want the following command, which gives<br /> you 3 reports at 5-second intervals.<br /><br /> $ iostat -xtc 5 3<br /> Linux 2.6.12-1.1376_FC3smp (squeezel.squeezel.com) 10/05/2005<br /><br /> Time: 07:05:04 PM<br /> avg-cpu: %user %nice %system %iowait %idle<br /> 0.97 0.06 1.94 0.62 96.41<br /><br /> Time: 07:05:09 PM<br /> avg-cpu: %user %nice %system %iowait %idle<br /> 0.60 0.00 1.70 0.00 97.70<br /><br /> Time: 07:05:14 PM<br /> avg-cpu: %user %nice %system %iowait %idle<br /> 1.00 0.00 1.60 0.00 97.39<br /><br /> vmstat reports memory statistics.<br /><br /><br /> $ vmstat<br /> $ ifconfig<br /> $ cat /proc/sys/vm/.. (entries under here)<br /><br /><br /> *NOTE: (TIP 77) shows sample usage of "ifconfig". Also<br /> (TIP 84) shows sample output of "$ cat /proc/cpuinfo". You can download iostat<br /> and other packages from (http://perso.wanadoo.fr/sebastien.godard/download_en.html).<br /> You also may want to look at iozone (TIP 178).<br /><br /> Also<br /><br /> $ cat /proc/meminfo<br /> $ cat /proc/stat<br /><br /> $ cat /proc/uptime<br /> 1078623.55 1048008.34 The first number is the number of seconds since boot.<br /> The second number is the number of idle seconds.<br /><br /> $ cat /proc/loadavg<br /> 0.25 0.14 0.10 1/166 7778 This shows the load averages over 1, 5, and 15 minutes,<br /> then 1 currently running process out<br /> of a total of 166. The 7778 is the last<br /> process id used.<br /> Ref: http://www.teamquest.com/resources/gunther/ldavg1.shtml<br /><br /> Or current process open file descriptors<br /><br /> $ ls -l /proc/self/fd<br /> lrwx------ 1 chirico chirico 64 Jun 29 13:17 0 -> /dev/pts/51<br /> lrwx------ 1 chirico chirico 64 Jun 29 13:17 1 -> /dev/pts/51<br /> lrwx------ 1 chirico chirico 64 Jun 29 13:17 2 -> /dev/pts/51<br /> lr-x------ 1 chirico chirico 64 Jun 29 13:17 3 -> /proc/26667/fd<br /><br /> So you could, $ echo "stuff" > /dev/pts/51, to get output.
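Since /proc/uptime and /proc/loadavg are plain text, they are easy to pick apart in the shell; a small Linux-only sketch using the field layout just described:

```shell
# First field of /proc/uptime is seconds since boot (a float); convert to days.
up=$(cut -d' ' -f1 /proc/uptime)
echo "up ${up%.*} seconds (~$(( ${up%.*} / 86400 )) days)"

# First field of /proc/loadavg is the 1-minute load average.
cut -d' ' -f1 /proc/loadavg
```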
Note, tree is also<br /> helpful here:<br /><br /> $ tree /proc/self<br /><br /> /proc/self<br /> |-- auxv<br /> |-- cmdline<br /> |-- cwd -> /work/souptonuts/documentation/theBook<br /> |-- environ<br /> |-- exe -> /usr/bin/tree<br /> |-- fd<br /> | |-- 0 -> /dev/pts/51<br /> | |-- 1 -> /dev/pts/51<br /> | |-- 2 -> /dev/pts/51<br /> | `-- 3 -> /proc/26668/fd<br /> |-- maps<br /> |-- mem<br /> |-- mounts<br /> |-- root -> /<br /> |-- stat<br /> |-- statm<br /> |-- status<br /> |-- task<br /> | `-- 26668<br /> | |-- auxv<br /> | |-- cmdline<br /> | |-- cwd -> /work/souptonuts/documentation/theBook<br /> | |-- environ<br /> | |-- exe -> /usr/bin/tree<br /> | |-- fd<br /> | | |-- 0 -> /dev/pts/51<br /> | | |-- 1 -> /dev/pts/51<br /> | | |-- 2 -> /dev/pts/51<br /> | | `-- 3 -> /proc/26668/task/26668/fd<br /> | |-- maps<br /> | |-- mem<br /> | |-- mounts<br /> | |-- root -> /<br /> | |-- stat<br /> | |-- statm<br /> | |-- status<br /> | `-- wchan<br /> `-- wchan<br /><br /> 10 directories, 28 files<br /><br /> Need a listing of the system settings?<br /><br /> $ sysctl -a<br /><br /> Need IPC (Shared Memory Segments, Semaphore Arrays, Message Queues) status<br /> etc?<br /><br /> $ ipcs<br /> $ ipcs -l "This gives limits"<br /><br /> Need to "watch" everything a user does? The following watches donkey.<br /><br /> $ watch lsof -u donkey<br /><br /> Or, to see what is going on in directory "/work/junk"<br /><br /> $ watch lsof +D /work/junk<br /><br /><br /><br />TIP 11:<br /><br /> How to make a File "immutable" or "unalterable" -- it cannot be changed<br /> or deleted even by root.
Note this works on (ext2/ext3) filesystems.<br /> And, yes, root can delete it after it's changed back.<br /><br /> As root:<br /><br /> $ chattr +i filename<br /><br /> And to change it back:<br /><br /> $ chattr -i filename<br /><br /> List attributes:<br /><br /> $ lsattr filename<br /><br /><br /><br />TIP 12:<br /><br /> SSH - How to Generate the Key Pair.<br /><br /><br /> On the local server<br /><br /> $ ssh-keygen -t dsa -b 2048<br /><br /> This will create the two files:<br /><br /> .ssh/id_dsa (Private key)<br /> .ssh/id_dsa.pub (Public key you can share)<br /><br /> Next insert ".ssh/id_dsa.pub" on the remote server<br /> into the files ".ssh/authorized_keys" and ".ssh/authorized_keys2",<br /> and change the permission of each file to (chmod 600). Plus, make<br /> sure the directory ".ssh" exists on the remote computer with 700 rights.<br /> Ok, assuming 192.168.1.155 is the remote server and "donkey" is the<br /> account on that remote server.<br /><br /> $ ssh donkey@192.168.1.155 "mkdir -p .ssh"<br /> $ ssh donkey@192.168.1.155 "chmod 700 .ssh"<br /> $ scp ./.ssh/id_dsa.pub donkey@192.168.1.155:.ssh/newkey.pub<br /><br /> Now connect to that remote server "192.168.1.155" and add .ssh/newkey.pub<br /> to both "authorized_keys" and "authorized_keys2". When done, set the permissions<br /> (this is on the remote server):<br /><br /> $ chmod 600 .ssh/authorized_key*<br /><br /> Next, go back to the local server and issue the following:<br /><br /> $ ssh-agent $SHELL<br /> $ ssh-add<br /><br /> The "ssh-add" will allow you to enter the passphrase and it will<br /> save it for the current login session.<br /><br /> You don't have to enter a passphrase when running "ssh-keygen" above. But,<br /> remember anyone with root access can "su - <username>" and then connect<br /> to your computers.
It's harder, however, not impossible, for root to do<br /> this if your key has a passphrase.<br /><br /> (Reference TIP 151)<br /><br /><br /><br />TIP 13:<br /><br /> Securing the System: Don't allow root to log in remotely. Instead,<br /> the admin could log in as another account, then "su -". However,<br /> root can still log in "from the local terminal".<br /><br /> In the "/etc/ssh/sshd_config" file change the following lines:<br /><br /> Protocol 2<br /> PermitRootLogin no<br /> PermitEmptyPasswords no<br /><br /> Then, restart ssh<br /><br /> /etc/init.d/sshd restart<br /><br /> Why would you want to do this? It's not possible for anyone to guess<br /> or keep trying the root account. This is especially good for computers<br /> on the Internet. So, even if the "root" password is known, they can't<br /> get access to the system remotely. Only from the terminal, which is locked<br /> in your computer room. However, if anyone has an account on the server,<br /> then they can log in under their account and then "su -".<br /><br /> Suppose you only want a limited number of users: "mchirico" and "donkey".<br /> Add the following line to "/etc/ssh/sshd_config". Note, this allows access<br /> for mchirico and donkey, but everyone else is denied.<br /><br /> # Once you add AllowUsers - everyone else is denied.<br /> AllowUsers mchirico donkey<br /><br /><br /><br />TIP 14:<br /><br /> Keep Logs Longer with Less Space.<br /><br /> Normally logs rotate monthly, overwriting all the old data.
Here's a<br /> sample "/etc/logrotate.conf" that will keep 12 months of backups,<br /> compressing the log files:<br /><br /> $ cat /etc/logrotate.conf<br /><br /> # see "man logrotate" for details<br /> # rotate log files weekly<br /> #chirico changes to monthly<br /> monthly<br /><br /> # keep 4 weeks worth of backlogs<br /> # keep 12 months of backup<br /> rotate 12<br /><br /> # create new (empty) log files after rotating old ones<br /> create<br /><br /> # uncomment this if you want your log files compressed<br /> compress<br /><br /> # RPM packages drop log rotation information into this directory<br /> include /etc/logrotate.d<br /><br /> # no packages own wtmp -- we'll rotate them here<br /> /var/log/wtmp {<br /> monthly<br /> create 0664 root utmp<br /> rotate 1<br /> }<br /><br /> # system-specific logs may also be configured here.<br /><br /><br /> Note: see TIP 1. The clock should always be correctly set.<br /><br /><br /><br />TIP 15:<br /><br /> What Network Services are Running?<br /><br /> $ netstat -atup<br /><br /> or<br /><br /> $ netstat -ap|grep LISTEN|less<br /><br /> This can be helpful to determine the services running.<br /><br /> Need stats on dropped UDP packets?<br /><br /> $ netstat -s -u<br /><br /> or TCP<br /><br /> $ netstat -s -t<br /><br /> or a summary of everything<br /><br /> $ netstat -s<br /><br /> or looking for error rates on the interface?<br /><br /> $ netstat -i<br /><br /> Listening interfaces?<br /><br /> $ netstat -l<br /><br /> (Tip above provided by Amos Shapira)<br /><br /> Also see TIP 77.<br /><br /><br /><br />TIP 16:<br /><br /> Apache: Creating and Using an ".htaccess" File<br /><br /><br /> Below is a sample ".htaccess" file which goes in<br /> "/usr/local/apache/htdocs/chirico/alpha/.htaccess" for this<br /> example<br /><br /><br /> AuthUserFile /usr/local/apache/htdocs/chirico/alpha/.htpasswd<br /> AuthGroupFile /dev/null<br /> AuthName "Your Name and regular password required"<br /> AuthType Basic<br
/> <limit><br /> require valid-user<br /> </limit><br /><br /> In order for this to work /usr/local/apache/conf/httpd.conf must<br /> have the following line in it:<br /><br /><br /> #<br /> <directory><br /> AllowOverride FileInfo AuthConfig Limit<br /> Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec<br /> <limit><br /> Order allow,deny<br /> Allow from all<br /> </limit><br /> <limitexcept><br /> Order deny,allow<br /> Deny from all<br /> </limitexcept><br /> </directory><br /><br /><br /><br /> Also, a password file must be created<br /><br /> $ /usr/local/apache/bin/htpasswd -c .htpasswd chirico<br /><br /> And enter the user names and passwords.<br /><br /> Next Reload Apache:<br /><br /> $ /etc/init.d/httpd reload<br /><br /> (Reference TIP 213 limit access to certain directories based on IP address).<br /><br /><br /><br />TIP 17:<br /><br /> Working with "mt" Commands: reading and writing to tape.<br /><br /> The following assumes the non-rewinding tape device "/dev/nst0"<br /> (the corresponding auto-rewinding device is "/dev/st0")<br /><br /> STEP 1 ( rewind the tape)<br /><br /> # mt -f /dev/nst0 rewind<br /><br /> STEP 2 (check to see if you are at block 0)<br /><br /> # mt -f /dev/nst0 tell<br /> At block 0.<br /><br /> STEP 3 (Back up and compress directories "one" and "two" with tar)<br /><br /> # tar -czf /dev/nst0 one two<br /><br /> STEP 4 (Check to see what block you are at)<br /><br /> # mt -f /dev/nst0 tell<br /><br /> You should get something like block 2 at this point.<br /><br /> STEP 5 (Rewind the tape)<br /><br /> # mt -f /dev/nst0 rewind<br /><br /> STEP 6 (List the files)<br /><br /> # tar -tzf /dev/nst0<br /> one/<br /> one/test<br /> two/<br /><br /> STEP 7 (Restore directory "one" into directory "junk"). Note, you<br /> have to first rewind the tape, since the last operation moved<br /> ahead 2 blocks.
Check this with "mt -f /dev/nst0 tell".<br /><br /> # cd junk<br /> # mt -f /dev/nst0 rewind<br /> # mt -f /dev/nst0 tell<br /> At block 0.<br /> # tar -xzf /dev/nst0 one<br /><br /> STEP 8 (Next, take a look to see what block the tape is at)<br /><br /> # mt -f /dev/nst0 tell<br /> At block 2.<br /><br /> STEP 9 (Now back up directories three and four)<br /><br /> # tar -czf /dev/nst0 three four<br /><br /> After backing up the files, the tape should be past block 2.<br /> Check this.<br /><br /> # mt -f /dev/nst0 tell<br /> At block 4.<br /><br /> Currently the following exists:<br /><br /> At block 1:<br /> one/<br /> one/test<br /> two/<br /><br /> At block 2:<br /> three/<br /> three/samplehere<br /> four/<br /><br /> At block 4:<br /> (* This is empty *)<br /><br /> A few notes. You can set the blocking factor and a label<br /> with tar. For example:<br /><br /> $ tar --label="temp label" --create --blocking-factor=128 --file=/dev/nst0 Notes<br /><br /> But note, if you try to read it with the default (incorrect) blocking<br /> factor, you will get the following error:<br /><br /> $ tar -t --file=/dev/nst0<br /> tar: /dev/nst0: Cannot read: Cannot allocate memory<br /> tar: At beginning of tape, quitting now<br /> tar: Error is not recoverable: exiting now<br /><br /> However, this is easily fixed with the correct blocking factor:<br /><br /> $ mt -f /dev/nst0 rewind<br /> $ tar -t --blocking-factor=128 --file=/dev/nst0<br /> temp label<br /> Notes<br /><br /> Take advantage of the label command.<br /><br /> $ MYCOMMENTS="Big_important_tape"<br /> $ tar --label="$(date +%F)"+"${MYCOMMENTS}"<br /><br /> Writing to tape on a remote 192.168.1.155 computer:<br /><br /> $ tar cvzf - ./tmp | ssh -l chirico 192.168.1.155 '(mt -f /dev/nst0 rewind; dd of=/dev/st0 )'<br /><br /> Restoring the contents from tape on a remote computer:<br /><br /> $ ssh -l chirico 192.168.1.155 '(mt -f /dev/nst0 rewind; dd if=/dev/st0 )'|tar xzf -<br /><br /> Getting data off of tape with the dd 
command with an odd blocking factor: just set ibs very high.<br /><br /> $ mt -f /dev/nst0 rewind<br /> $ tar --label="Contents of Notes" --create --blocking-factor=128 --file=/dev/nst0 Notes<br /> $ mt -f /dev/nst0 rewind<br /> $ dd ibs=1048576 if=/dev/st0 of=notes.tar<br /><br /> The above will probably work with ibs=64k as well.<br /><br /> (Also see TIP 136)<br /><br /><br /><br />TIP 18:<br /><br /> Encrypting Data to Tape using "tar" and "openssl".<br /><br /> The following shows an example of writing the contents of "tapetest" to tape:<br /><br /> $ tar zcvf - tapetest|openssl des3 -salt -k secretpassword | dd of=/dev/st0<br /><br /> Reading the data back:<br /><br /> $ dd if=/dev/st0|openssl des3 -d -k secretpassword|tar xzf -<br /><br /><br /><br />TIP 19:<br /><br /> Mounting an ISO Image as a Filesystem -- this is great if you don't have the DVD<br /> hardware but need to get at the data. The following shows an example of<br /> mounting the Fedora Core 2 DVD image as a file.<br /><br /> $ mkdir /iso0<br /> $ mount -o loop -t iso9660 /FC2-i386-DVD.iso /iso0<br /><br /> Or to mount automatically at boot, add the following to "/etc/fstab":<br /><br /> /FC2-i386-DVD.iso /iso0 iso9660 rw,loop 0 0<br /><br /><br /> Reference: http://umn.dl.sourceforge.net/sourceforge/souptonuts/README_fedora.txt<br /><br /><br /><br />TIP 20:<br /><br /> Getting Information about the Hard Drive and Listing all PCI Devices.<br /><br /> $ hdparm /dev/hda<br /><br /> /dev/hda:<br /> multcount = 16 (on)<br /> IO_support = 0 (default 16-bit)<br /> unmaskirq = 0 (off)<br /> using_dma = 1 (on)<br /> keepsettings = 0 (off)<br /> readonly = 0 (off)<br /> readahead = 256 (on)<br /> geometry = 16383/255/63, sectors = 234375000, start = 0<br /><br /> or for SCSI<br /><br /> $ hdparm /dev/sda<br /><br /> Try it with the -i option for more information:<br /><br /> $ hdparm -i /dev/hda<br /><br /> /dev/hda:<br /><br /> Model=IC35L120AVV207-1, FwRev=V24OA66A, SerialNo=VNVD09G4CZ6E0T<br /> Config={ HardSect NotMFM 
HdSw>15uSec Fixed DTR>10Mbs }<br /> RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=52<br /> BuffType=DualPortCache, BuffSize=7965kB, MaxMultSect=16, MultSect=16<br /> CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=234375000<br /> IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}<br /> PIO modes: pio0 pio1 pio2 pio3 pio4<br /> DMA modes: mdma0 mdma1 mdma2<br /> UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5<br /> AdvancedPM=yes: disabled (255) WriteCache=enabled<br /> Drive conforms to: ATA/ATAPI-6 T13 1410D revision 3a: 2 3 4 5 6<br /><br /> How fast is your drive?<br /><br /> $ hdparm -tT /dev/hda<br /><br /> /dev/hda:<br /> Timing buffer-cache reads: 128 MB in 0.41 seconds =315.32 MB/sec<br /> Timing buffered disk reads: 64 MB in 1.19 seconds = 53.65 MB/sec<br /><br /> Need to find your device?<br /><br /> $ mount<br /> or<br /> $ cat /proc/partitions<br /> or<br /> $ dmesg | egrep '^(s|h)d'<br /><br /> which for my system lists:<br /><br /> hda: IC35L120AVV207-1, ATA DISK drive<br /> hdc: Lite-On LTN486S 48x Max, ATAPI CD/DVD-ROM drive<br /> hda: max request size: 1024KiB<br /> hda: 234375000 sectors (120000 MB) w/7965KiB Cache, CHS=16383/255/63, UDMA(100)<br /><br /> By the way, if you want to turn on dma<br /><br /> $ hdparm -d1 /dev/hda<br /> setting using_dma to 1 (on)<br /> using_dma = 1 (on)<br /><br /> (Also see TIP 122 )<br /><br /> List all PCI devices<br /><br /> $ lspci -v<br /><br /> 00:00.0 Host bridge: Intel Corp. 82845G/GL [Brookdale-G] Chipset Host Bridge (rev<br /> Subsystem: Dell Computer Corporation: Unknown device 0160<br /> Flags: bus master, fast devsel, latency 0<br /> Memory at f0000000 (32-bit, prefetchable) [size=128M]<br /> Capabilities: <available><br /><br /> ... 
lots more ...<br /><br /> Note, there is also lspci -vv for even more information.<br /><br /> (Also see TIP 200)<br /><br />TIP 21:<br /><br /> Setting up "cron" Jobs.<br /><br /> If you want to use the emacs editor for editing cron jobs,<br /> set the following in your "/home/user/.bash_profile":<br /><br /> export EDITOR=emacs<br /><br /> Then, to edit cron jobs:<br /><br /> $ crontab -e<br /><br /> You may want to put in the following header:<br /><br /> #MINUTE(0-59) HOUR(0-23) DAYOFMONTH(1-31) MONTHOFYEAR(1-12) DAYOFWEEK(0-6) Note 0=Sun and 7=Sun<br /> #<br /> #14,15 10 * * 0 /usr/bin/somecommand >/dev/null 2>&1<br /><br /> The sample "commented out command" will run at 10:14 and 10:15 every Sunday. There will<br /> be no "mail" sent to the user because of the ">/dev/null 2>&1" entry.<br /><br /> $ crontab -l<br /><br /> The above will list all cron jobs. Or if you're root:<br /><br /> $ crontab -l -u <username><br /> $ crontab -e -u <username><br /><br /> Reference "man 5 crontab":<br /><br /> The time and date fields are:<br /><br /> field allowed values<br /> ----- --------------<br /> minute 0-59<br /> hour 0-23<br /> day of month 1-31<br /> month 1-12 (or names, see below)<br /> day of week 0-7 (0 or 7 is Sun, or use names)<br /><br /> A field may be an asterisk (*), which always stands for ``first-last''.<br /><br /> Ranges of numbers are allowed. Ranges are two numbers separated with a<br /> hyphen. The specified range is inclusive. For example, 8-11 for an<br /> ``hours'' entry specifies execution at hours 8, 9, 10 and 11.<br /><br /> Lists are allowed. A list is a set of numbers (or ranges) separated by<br /> commas. 
Examples: ``1,2,5,9'', ``0-4,8-12''.<br /><br /> Ranges can include "steps", so "1-9/2" is the same as "1,3,5,7,9".<br /><br /> Note, you can run a job every 5 minutes as follows:<br /><br /> */5 * * * * /etc/mrtg/domrtg >/dev/null 2>&1<br /><br /> To run jobs hourly, daily, weekly, or monthly you can add shell scripts into the<br /> appropriate directory:<br /><br /> /etc/cron.hourly/<br /> /etc/cron.daily/<br /> /etc/cron.weekly/<br /> /etc/cron.monthly/<br /><br /> Note that the above are pre-configured schedules set in "/etc/crontab", so<br /> if you want, you can change the schedule. Below is my /etc/crontab:<br /><br /> $ cat /etc/crontab<br /> SHELL=/bin/bash<br /> PATH=/sbin:/bin:/usr/sbin:/usr/bin<br /> MAILTO=root<br /> HOME=/<br /><br /> # run-parts<br /> 01 * * * * root run-parts /etc/cron.hourly<br /> 02 4 * * * root run-parts /etc/cron.daily<br /> 22 4 * * 0 root run-parts /etc/cron.weekly<br /> 42 4 1 * * root run-parts /etc/cron.monthly<br /><br /><br /><br />TIP 22:<br /><br /> Keeping Files in Sync Between Servers.<br /><br /> The remote computer is "192.168.1.171" and has the account "donkey". You want<br /> to "keep in sync" the files under "/home/cu2000/Logs" on the remote computer<br /> with files on "/home/chirico/dev/MEDIA_Server" on the local computer.<br /><br /> $ rsync -Lae ssh donkey@192.168.1.171:/home/cu2000/Logs /home/chirico/dev/MEDIA_Server<br /><br /> "rsync" is a convenient command for keeping files in sync, and as shown here it will work<br /> through ssh. 
The -L option tells rsync to treat symbolic links like ordinary files.<br /><br /> Also see [http://www.rsnapshot.org/]<br /><br /><br /><br />TIP 23:<br /><br /> Looking up the Spelling of a Word.<br /><br /> $ look <partial><br /><br /> so the following will list all words that<br /> start with "stuff"<br /><br /> $ look stuff<br /> stuff<br /> stuffage<br /> stuffata<br /> stuffed<br /> stuffender<br /> stuffer<br /> stuffers<br /> stuffgownsman<br /> stuffier<br /> stuffiest<br /> stuffily<br /> stuffiness<br /> stuffinesses<br /> stuffiness's<br /> stuffing<br /> stuffings<br /> stuffing's<br /> stuffless<br /> stuffs<br /> stuffy<br /><br /> It helps to have a large "linuxwords" dictionary. You can download<br /> a much bigger dictionary from the following:<br /><br /> http://prdownloads.sourceforge.net/souptonuts/linuxwords.1.tar.gz?download<br /><br /> Note: vim users can set up the .vimrc file with the following. Now when you type<br /> CTL-X CTL-T in insert mode, you'll get a thesaurus lookup.<br /><br /> set dictionary+=/usr/share/dict/words<br /> set thesaurus+=/usr/share/dict/words<br /><br /> Or, you can call aspell with the F6 command after putting the following entry in your<br /> .vimrc file:<br /><br /> :nmap <f6> :w<cr>:!aspell -e -c %<cr>:e<cr><br /><br /> Now, hit F6 when you're in vim, and you'll get a spell checker.<br /><br /><br /> There is also an X Windows dictionary that runs with the following command.<br /><br /> $ gnome-dictionary<br /><br /><br /><br />TIP 24:<br /><br /> Find out if a Command is Aliased.<br /><br /> $ type -all <command><br /><br /> Example:<br /><br /> $ type -all ls<br /> ls is aliased to `ls --color=tty'<br /> ls is /bin/ls<br /><br /><br /><br />TIP 25:<br /><br /> Create a Terminal Calculator<br /><br /> Put the following in your .bashrc file:<br /><br /> function calc<br /> {<br /> echo "${1}"|bc -l;<br /> }<br /><br /> Or, run it at the shell prompt. 
Now<br /> "calc" from the shell will work as follows:<br /><br /> $ calc 3+45<br /> 48<br /><br /> All functions with a "(" or ")" must be enclosed<br /> in quotes. For instance, to get the sin of .4:<br /><br /> $ calc "s(.4)"<br /> .38941834230865049166<br /><br /> (See TIP 115 using the expr command)<br /><br /><br /><br />TIP 26:<br /><br /> Kill a User and All Their Current Processes.<br /><br /><br /> #!/bin/bash<br /> # This program will kill all processes from a<br /> # user. The user name is read from the command line.<br /> #<br /> # This program also demonstrates reading a bash variable<br /> # into an awk script.<br /> #<br /> # Usage: kill9user <user><br /> #<br /> kill -9 `ps aux|awk -v var=$1 '$1==var { print $2 }'`<br /><br /> Or, if you don't want to create the above script, the command<br /> below will kill the user "donkey" and all of his processes.<br /><br /> $ kill -9 `ps aux|awk -v var="donkey" '$1==var { print $2 }'`<br /><br /> Check their cron jobs and "at" jobs, if you have a security issue.<br /><br /> $ crontab -u <user> -e<br /><br /> Lock the account:<br /><br /> $ passwd -l <user><br /><br /> Remove all authorized_keys<br /><br /> $ rm /home/user/.shosts<br /> $ rm /home/user/.rhosts<br /> $ rm -rf /home/user/.ssh<br /> $ rm /home/user/.forward<br /><br /> or consider<br /><br /> $ mv /home/user /home/safeuser<br /><br /><br /> Change the shell:<br /><br /> $ chsh -s /bin/true <user><br /><br /> Do an inventory:<br /><br /> $ find / -user <user> > list_of_user_files<br /><br /> NOTE: Also see (TIP 10).<br /><br /> To see all users, except the current user. 
Do not use the<br /> dash: "ps -aux" is wrong, but the following is correct:<br /><br /> $ ps aux| awk '!/'${USER}'/{printf("%s \n",$0)}'<br /><br /> or (ww = wide, wide output)<br /><br /> $ ps auwwx| awk '!/'${USER}'/{printf("%s \n",$0)}'<br /><br /><br /> The following codes may be useful:<br /><br /> D Uninterruptible sleep (usually IO)<br /> R Running or runnable (on run queue)<br /> S Interruptible sleep (waiting for an event to complete)<br /> T Stopped, either by a job control signal or because it is being traced.<br /> W paging (not valid since the 2.6.xx kernel)<br /> X dead (should never be seen)<br /> Z Defunct ("zombie") process, terminated but not reaped by its parent.<br /><br /> For BSD formats and when the stat keyword is used, additional<br /> characters may be displayed:<br /><br /> < high-priority (not nice to other users)<br /> N low-priority (nice to other users)<br /> L has pages locked into memory (for real-time and custom IO)<br /> s is a session leader<br /> l is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)<br /> + is in the foreground process group<br /><br /><br /> Also see TIP 28 and TIP 89.<br /><br /><br /><br />TIP 27:<br /><br /> Format Dates for Logs and Files<br /><br /> $ date "+%m%d%y %A,%B %d %Y %X"<br /> 061704 Thursday,June 17 2004 07:13:40 PM<br /><br /> $ date "+%m%d%Y"<br /> 06172004<br /><br /> $ date -d '1 day ago' "+%m%d%Y"<br /> 06162004<br /><br /> $ date -d '3 months 1 day 2 hour 15 minutes 2 seconds ago'<br /><br /> or to go into the future, remove the "ago":<br /><br /> $ date -d '3 months 1 day 2 hour 15 minutes 2 seconds '<br /><br /> Also the following works:<br /><br /> $ date -d '+2 year +1 month -1 week +3 day -8 hour +2 min -5 seconds'<br /><br /> Quick question: If there are 100,000,000 stars in the visible sky, and you can<br /> count them, round the clock, at a rate of a star per second starting now, when<br /> would you finish counting? 
Would you still be alive?<br /><br /> $ date -d '+100000000 seconds'<br /><br /> Sooner than you think!<br /><br /> This can be assigned to variables<br /><br /> $ mdate=`date -d '3 months 1 day 2 hour 15 minutes 2 seconds ' "+%m%d%Y_%A_%B_%D_%Y_%X" `<br /> $ echo $mdate<br /> 09182004_Saturday_September_09/18/04_2004_09:40:41 PM<br /> ^---- Easy to sort ^-------^----- Easy to read<br /><br /> See TIP 28 below.<br /><br /> See TIP 87 when working with large delta time changes -40 years, or -200 years ago, or even<br /> 1,000,000 days into the future.<br /><br /> Also see (TIP 1) for working with time zones.<br /><br /><br /><br />TIP 28:<br /><br /> Need Ascii Codes? For instance, for printing quotes:<br /><br /> awk 'BEGIN { msg = "Don\047t Panic!"; printf "%s \n",msg }'<br /> or<br /> awk 'BEGIN { msg = "Don\x027t Panic!"; printf "%s \n",msg }'<br /><br /> It's better to use \047, because certain characters that follow \x027 may cause problems.<br /><br /> For example, take a look at the following two lines. The first line prints a "}" caused<br /> by the extra D in \x027D. 
The line immediately below does not work as expected.<br /><br /> awk 'BEGIN {printf("The D causes problems \x027D\n")}'<br /><br /> However, the line below works fine:<br /><br /> awk 'BEGIN {printf("The D does not cause problems \047D\n")}'<br /><br /> Or suppose you wanted to use the date command in "awk" to print date.time.nanosecond.timezone for<br /> each line of a file "test".<br /><br /> The following date command can be used in awk because the single quotes are enclosed within the<br /> double quotes.<br /><br /> date '+%m%d%Y.%H%M%S.%N%z'<br /><br /> $ awk 'BEGIN { "date '+%m%d%Y.%H%M%S.%N%z'" | getline MyDate } { print MyDate,$0 }' < data<br /><br /> But it's also possible to replace "+" with \x2B, "%" with \x25, and "d" with \x64 as follows:<br /><br /> $ awk 'BEGIN { "date \x27\x2B\x25m\x25\x64\x25Y.\x25H\x25M\x25S.\x25N\x25z\x27" | getline MyDate } { print MyDate,$0 }' < test<br /><br /> 07062004.113820.346033000-0400 bob 71<br /> 07062004.113820.346033000-0400 tom 43<br /> 07062004.113820.346033000-0400 sal 34<br /> 07062004.113820.346033000-0400 bob 89<br /> 07062004.113820.346033000-0400 tom 66<br /> 07062004.113820.346033000-0400 sal 99<br /><br /> For this example it's not needed because single quotes are used inside of double quotes; however, there may be times when<br /> hex replacement is easier.<br /><br /><br /> $ man ascii<br /><br /> Oct Dec Hex Char Oct Dec Hex Char<br /> -----------------------------------------------------------<br /> 000 0 00 NUL '\0' 100 64 40 @<br /> 001 1 01 SOH 101 65 41 A<br /> 002 2 02 STX 102 66 42 B<br /> 003 3 03 ETX 103 67 43 C<br /> 004 4 04 EOT 104 68 44 D<br /> 005 5 05 ENQ 105 69 45 E<br /> 006 6 06 ACK 106 70 46 F<br /> 007 7 07 BEL '\a' 107 71 47 G<br /> 010 8 08 BS '\b' 110 72 48 H<br /> 011 9 09 HT '\t' 111 73 49 I<br /> 012 10 0A LF '\n' 112 74 4A J<br /> 013 11 0B VT '\v' 113 75 4B K<br /> 014 12 0C FF '\f' 114 76 4C L<br /> 015 13 0D CR '\r' 115 77 4D M<br /> 016 14 0E SO 116 78 4E N<br /> 017 15 0F SI 117 
79 4F O<br /> 020 16 10 DLE 120 80 50 P<br /> 021 17 11 DC1 121 81 51 Q<br /> 022 18 12 DC2 122 82 52 R<br /> 023 19 13 DC3 123 83 53 S<br /> 024 20 14 DC4 124 84 54 T<br /> 025 21 15 NAK 125 85 55 U<br /> 026 22 16 SYN 126 86 56 V<br /> 027 23 17 ETB 127 87 57 W<br /> 030 24 18 CAN 130 88 58 X<br /> 031 25 19 EM 131 89 59 Y<br /> 032 26 1A SUB 132 90 5A Z<br /> 033 27 1B ESC 133 91 5B [<br /> 034 28 1C FS 134 92 5C \ '\\'<br /> 035 29 1D GS 135 93 5D ]<br /> 036 30 1E RS 136 94 5E ^<br /> 037 31 1F US 137 95 5F _<br /> 040 32 20 SPACE 140 96 60 `<br /> 041 33 21 ! 141 97 61 a<br /> 042 34 22 " 142 98 62 b<br /> 043 35 23 # 143 99 63 c<br /> 044 36 24 $ 144 100 64 d<br /> 045 37 25 % 145 101 65 e<br /> 046 38 26 & 146 102 66 f<br /> 047 39 27 ' 147 103 67 g<br /> 050 40 28 ( 150 104 68 h<br /> 051 41 29 ) 151 105 69 i<br /> 052 42 2A * 152 106 6A j<br /> 053 43 2B + 153 107 6B k<br /> 054 44 2C , 154 108 6C l<br /> 055 45 2D - 155 109 6D m<br /> 056 46 2E . 156 110 6E n<br /> 057 47 2F / 157 111 6F o<br /> 060 48 30 0 160 112 70 p<br /> 061 49 31 1 161 113 71 q<br /> 062 50 32 2 162 114 72 r<br /> 063 51 33 3 163 115 73 s<br /> 064 52 34 4 164 116 74 t<br /> 065 53 35 5 165 117 75 u<br /> 066 54 36 6 166 118 76 v<br /> 067 55 37 7 167 119 77 w<br /> 070 56 38 8 170 120 78 x<br /> 071 57 39 9 171 121 79 y<br /> 072 58 3A : 172 122 7A z<br /> 073 59 3B ; 173 123 7B {<br /> 074 60 3C < 174 124 7C |<br /> 075 61 3D = 175 125 7D }<br /> 076 62 3E > 176 126 7E ~<br /> 077 63 3F ? 177 127 7F DEL<br /><br /><br /><br />TIP 29:<br /><br /> Need a WWW Browser for the Terminal Session? 
Try lynx or elinks.<br /><br /> $ lynx<br /><br /> Or to read all these tips, with the latest updates:<br /><br /> $ lynx http://umn.dl.sourceforge.net/sourceforge/souptonuts/How_to_Linux_and_Open_Source.txt<br /><br /><br /> Or, better yet, elinks.<br /><br /> $ elinks http://somepage.<br /><br /> You can get elinks at the following site:<br /><br /> http://elinks.or.cz/<br /><br /><br /><br />TIP 30:<br /><br /> screen - screen manager with VT100/ANSI terminal emulation<br /><br /> This is an excellent utility. But if you work a lot in Emacs,<br /> you should place the following in your ~/.bashrc:<br /><br /> alias s='screen -e^Pa -D -R'<br /><br /> After logging in again (or sourcing .bashrc),<br /> type the following to load "screen":<br /><br /> $ s<br /><br /> If you're not using the alias command above, substitute<br /> CTL-a for CTL-p below:<br /><br /> CTL-p CTL-C To get a new session<br /> CTL-p " To list sessions, and arrow keys to move<br /> CTL-p SHFT-A To name sessions<br /> CTL-p S To split screens<br /> CTL-p Q To unsplit screens<br /> CTL-p TAB To switch between screens<br /> CTL-p :resize n To resize screen to n rows, on split screen<br /><br /><br /> Screen is very powerful. 
Should you become disconnected, you can<br /> still resume work after logging in.<br /><br /> $ man screen<br /><br /> The above command will give you more information.<br /><br /><br /><br />TIP 31:<br /><br /> Need to Find the Factors of a Number?<br /><br /> $ factor 2345678992<br /> 2345678992: 2 2 2 2 6581 22277<br /><br /> It's a quick way to find out if a number is prime:<br /><br /> $ factor 7867<br /> 7867: 7867<br /><br /><br /><br />TIP 32:<br /><br /> Less is More -- piping to less to scroll backward and forward<br /><br /> For large "ls" listings try the following, then use the arrow keys<br /> to move up and down the list.<br /><br /> $ ls /some_large_dir/ | less<br /><br /> or<br /><br /> $ cat some_large_file | less<br /><br /> or<br /><br /> $ less some_large_file<br /><br /><br /><br />TIP 33:<br /><br /> C "indent" Settings for Kernel Development<br /><br /> $ indent -kr -i8 program.c<br /><br /><br /><br />TIP 34:<br /><br /> FTP auto-login. "ftp" to a site and have the password stored.<br /><br /> For instance, here's a sample ".netrc" file in a user's home<br /> directory for uploading to sourceforge. Note, sourceforge will<br /> take any password, so m@temp.com is used here for login "anonymous".<br /><br /> $ cat ~/.netrc<br /> machine upload.sourceforge.net login anonymous password m@temp.com<br /> default login anonymous password user@site<br /><br /> It might be a good idea to change the rights on this file:<br /><br /> $ chmod 0400 ~/.netrc<br /><br /><br /> #!/bin/bash<br /> #<br /> # Sample ftp automated script to download<br /> # files to ${dwnld}<br /> #<br /> dwnld="/work/faq/unix-faq"<br /> cd ${dwnld}<br /> ftp << FTPSTRING<br /> prompt off<br /> open rtfm.mit.edu<br /> cd /pub/usenet-by-group/news.answers/unix-faq/faq<br /> mget contents<br /> mget diff<br /> mget part*<br /> bye<br /> FTPSTRING<br /><br /> Sourceforge uses an anonymous login with an email address as<br /> a password. 
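The FTPSTRING block in the script above is an ordinary shell here-document; the same mechanism can be tried with any command that reads stdin, cat for instance:

```shell
# Everything between the FTPSTRING markers is fed to the command's
# standard input -- exactly how the ftp script above is driven
cat << FTPSTRING
open upload.sourceforge.net
user anonymous m@temp.com
FTPSTRING
```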
Below is the automated script I use for uploading<br /> binary files.<br /><br /> #!/bin/bash<br /> # ftp sourceforge auto upload ftpup.sh<br /> # Usage: ./ftpup.sh <filename><br /> #<br /> # machine upload.sourceforge.net user anonymous m@aol.com<br /> ftp -n -u << FTPSTRING<br /> open upload.sourceforge.net<br /> user anonymous m@aol.com<br /> binary<br /> cd incoming<br /> put ${1}<br /> bye<br /> FTPSTRING<br /><br /> (Also see TIP 114 for ncftpget, which is a very powerful restarting<br /> ftp program)<br /><br /><br /><br />TIP 35:<br /><br /> Bash Brace Expansion<br /><br /> $ echo f{ee,ie,oe,um}<br /> fee fie foe fum<br /><br /> This works with almost any command:<br /><br /> $ mkdir -p /work/junk/{one,two,three,four}<br /><br /><br /><br />TIP 36:<br /><br /> Getting a List of User Accounts on the System<br /><br /> $ cut -d: -f1 /etc/passwd | sort<br /><br /><br /><br />TIP 37:<br /><br /> Editing a Bash Command<br /><br /> Try typing a long command, then type "fc" for an easy way<br /> to edit the command.<br /><br /> $ find /etc -iname '*.cnf' -exec grep -H 'log' {} \;<br /> $ fc<br /><br /> "fc" will bring the last command typed into an editor, "emacs" if<br /> that's the default editor. Type "fc -l" to list the last few commands.<br /><br /> To search for a previous command, try typing "CTL-r" at the shell<br /> prompt. 
"CTL-t" to transpose, say "sl" was typed by you want "ls".<br /><br /><br /><br /> Hints when using "fc: in emacs:<br /><br /> ESC-b move one word backward<br /> ESC-f move one word forward<br /> ESC-DEL kill one word backward<br /> CTL-k kill point to end<br /> CTL-y un-yank killed region at point<br /><br /><br /><br />TIP 38:<br /><br /> Moving around Directories.<br /><br /> Change to the home directory:<br /> $ cd ~<br /> or<br /> $ cd<br /><br /> To go back to the last directory<br /> $ cd -<br /><br /> Instead of "cd" to a directory try "pushd" and look<br /> at the heading...you can see a list of directories.<br /><br /> $ pushd /etc<br /> $ pushd /usr/local<br /><br /> Then, to get back "popd" or "popd 1"<br /><br /> To list all the directories pushed on the stack<br /> use the "dirs -v" command.<br /><br /> $ dirs -v<br /> 0 /usr/local<br /> 1 /etc<br /> 2 /work/souptonuts/documentation/theBook<br /><br /> Now, if you "pushd +1" you will be moved to "/etc", since<br /> is number "1" on the stack, and this directory will become<br /> "0".<br /><br /> $ pwd<br /> /usr/local<br /> $ pushd +1<br /> $ pwd<br /> /etc<br /><br /> $ dirs -v<br /> 0 /etc<br /> 1 /work/souptonuts/documentation/theBook<br /> 2 /usr/local<br /><br /><br /><br />TIP 39:<br /><br /> Need an Underscore after a Variable?<br /><br /> Enclose the variable in "{}".<br /><br /> $echo ${UID}_<br /><br /> Compare to<br /><br /> $echo $UID_<br /><br /> Also try the following:<br /><br /><br /> $ m="my stuff here"<br /> $ echo -e ${m// /'\n'}<br /> my<br /> stuff<br /> here<br /><br /><br /><br />TIP 40:<br /><br /> Bash Variable Offset and String Operators<br /><br /> $ r="this is stuff"<br /> $ echo ${r:3}<br /> $ echo ${r:5:2}<br /><br /> Note, ${varname:offset:length}<br /><br /><br /> ${varname:?message} If varname exist and isn't null return value, else,<br /> print var and message.<br /><br /> $ r="new stuff"<br /> $ echo ${r:? 
"that's r for you"}<br /> new stuff<br /> $ unset r<br /> $ echo ${r:? "that's r for you"}<br /> bash: r: that's r for you<br /><br /> ${varname:+word} If varname exist and not null return word. Else, return null.<br /><br /> ${varname:-word} If varname exist and not null return value. Else, return word.<br /><br /> Working with arrays in bash - bash arrays.<br /><br /> $ unset p<br /> $ p=(one two three)<br /><br /> $ echo -e "${p[@]}"<br /> one two three<br /><br /> or<br /><br /> $ echo -e "${p[*]}"<br /> one two three<br /><br /> $ echo -e "${#p[@]}"<br /> 3<br /><br /> $ echo -e "${p[0]}"<br /> one<br /><br /> $ echo -e "${p[1]}"<br /> two<br /><br /> Also see (TIP 95)<br /><br /><br /><br />TIP 41:<br /><br /> Loops in Bash<br /><br /><br /> The command below loops through directories listed in $PATH.<br /><br /> $ path=$PATH:<br /> $ while [ $path ]; do echo " ${path%%:*} "; path=${path#*:}; done<br /><br /> The command below will also loop through directories in your path.<br /><br /> $IFS=:<br /> $ for dir in $PATH<br /> > do<br /> > ls -ld $dir<br /> > done<br /> drwxr-xr-x 2 root root 4096 Jun 10 20:16 /usr/local/bin<br /> drwxr-xr-x 2 root root 4096 Jun 13 23:12 /bin<br /> drwxr-xr-x 3 root root 40960 Jun 12 08:00 /usr/bin<br /> drwxr-xr-x 2 root root 4096 Feb 14 03:12 /usr/X11R6/bin<br /> drwxrwxr-x 2 chirico chirico 4096 Jun 6 13:06 /home/chirico/bin<br /><br /> Other ways of doing loops:<br /><br /> for (( i=1; i <= 20; i++))<br /> do<br /> echo -n "$i "<br /> done<br /><br /> Note, to do it all on one line, do the following:<br /><br /> $ for (( i=1; i <= 20; i++)); do echo -n "$i"; done<br /><br /> Below, is an example of declaring i an integer so that you do not<br /> have to preface with let.<br /><br /> $ declare -i i<br /> $ i=5;<br /> $ while (( $i > 1 )); do<br /> > i=i-1<br /> > echo $i<br /> > done<br /> 4<br /> 3<br /> 2<br /><br /> You can also use "while [ $i -gt 1 ]; do" in place of "while (( $i > 1 )); do"<br /><br /> To get a listing 
of all declared values:<br /><br /> $ declare -i<br /><br /><br /> Try putting a few words in the file "test":<br /><br /> $ while read filename; do echo "- $filename "; done < test |nl -w1<br /><br /> Or, using an array:<br /><br /> declare -a Array<br /> Array[0]="zero"<br /> Array[1]="one"<br /> Array[2]="two"<br /> for i in `seq ${#Array[@]}`<br /> do<br /> echo ${Array[$i-1]}<br /> done<br /><br /> Also see (TIP 95 and TIP 133).<br /><br /><br /><br />TIP 42:<br /><br /> "diff" and "patch".<br /><br /> You have created a program "prog.c", saved as this name and also copied<br /> to "prog.c.old". You post "prog.c" to users. Next, you make changes<br /> to prog.c<br /><br /> $ diff -c prog.c.old prog.c > prog.patch<br /><br /> Now, users can get the latest updates by running:<br /><br /> $ patch < prog.patch<br /><br /> By the way, you can make backups of your data easily.<br /><br /> $ cp /etc/fstab{,.bak}<br /><br /> Now, you do your edits to "/etc/fstab" and if you need<br /> to go back to the original, you can find it at<br /> "/etc/fstab.bak".<br /><br /><br /><br />TIP 43:<br /><br /> "cat" the Contents of Files Listed in a File, in That Order.<br /><br /> SETUP (Assume you have the following)<br /><br /> $ cat file_of_files<br /> file1<br /> file2<br /><br /> $ cat file1<br /> This is the data in file1<br /><br /> $ cat file2<br /> This is the data in file2<br /><br /> So there are 3 files here: "file_of_files", which contains the names of the<br /> other files, in this case "file1" and "file2". 
And the contents of<br /> "file1" and "file2" are shown above.<br /><br /> $ cat file_of_files|xargs cat<br /> This is the data in file1<br /> This is the data in file2<br /><br /> Also see (TIP 44, TIP 62 and TIP 235).<br /><br /><br /><br />TIP 44:<br /><br /> Columns and Rows -- getting anything you want.<br /><br /> Assume you have the following file.<br /><br /> $ cat data<br /> 1 2 3<br /> 4 5<br /> 6 7 8 9 10<br /> 11 12<br /> 13 14<br /><br /> How do you get everything in 2 columns?<br /><br /> $ cat data|tr ' ' '\n'|xargs -l2<br /> 1 2<br /> 3 4<br /> 5 6<br /> 7 8<br /> 9 10<br /> 11 12<br /> 13 14<br /><br /> Three columns?<br /><br /> $ cat data|tr ' ' '\n'|xargs -l3<br /> 1 2 3<br /> 4 5 6<br /> 7 8 9<br /> 10 11 12<br /> 13 14<br /><br /> What's the row sum of the "three columns"?<br /><br /> $ cat data|tr ' ' '\n'|xargs -l3|tr ' ' '+'|bc<br /> 6<br /> 15<br /> 24<br /> 33<br /> 27<br /><br /> or<br /><br /> $ tr ' ' '\n' < data |xargs -l3|tr ' ' '+'|bc<br /><br /> NOTE "Steven Heiner's rule":<br /><br /> cat one_file | program<br /><br /> can always be rewritten as<br /><br /> program < one_file<br /><br /> Note: thanks to Steven Heiner (http://www.shelldorado.com/) the above can be<br /> shortened as follows:<br /><br /> $ tr ' ' '\n' < data|xargs -l3|tr ' ' '+'|bc<br /><br /> Need to "tr" from the stdin?<br /><br /> $ tr "xy" "yx"| ... 
| ...<br /><br /> But note the "Stephane CHAZELAS" caveat here:<br /><br /> "Note that tr, sed, and awk may fail on files containing '\0'.<br /> sed and awk have unspecified behaviors if the input<br /> doesn't end in a '\n' (or to sum up, cat works for<br /> binary and text files, text utilities such as sed or awk<br /> work only for text files)."<br /><br /><br /><br />TIP 45:<br /><br /> Auto Directory Spelling Corrections.<br /><br /> To turn this on:<br /><br /> $ shopt -s cdspell<br /><br /> Now misspell a directory in the cd command.<br /><br /> $ cd /usk/local<br /> ^-------- still gets you to --<br /> |<br /> /usr/local<br /><br /> What other options can you set? The following will list<br /> all the options:<br /><br /> $ shopt -p<br /><br /><br /><br />TIP 46:<br /><br /> Record Everything Printed on Your Terminal Screen.<br /><br /> $ script -a <filename><br /><br /> Now start doing stuff and "everything" is appended to <filename>.<br /> For example:<br /><br /> $ script installation<br /><br /> $ (command)<br /><br /> $ (result)<br /><br /> $ ...<br /><br /> $ ...<br /><br /> $ (command)<br /><br /> $ (result)<br /><br /> $ exit<br /><br /> The whole session log is in the installation file that you can later<br /> read and/or clean up and add to documentation.<br /><br /> This command can also be used to redirect the contents to another user,<br /> but you must be root to do this.<br /><br /> Step 1 - find out what pts they are using.<br /><br /> $ w<br /><br /> Step 2 - Run script on that pts. 
After running this command below<br /> everything you type will appear on their screen.<br /><br /> $ script /dev/pts/4<br /><br /><br /> Thanks to Jacques.GARNIER-EXTERIEUR@EU.RHODIA.COM for his contribution<br /> to this tip.<br /><br /> Also reference TIP 208.<br /><br /><br /><br />TIP 47:<br /><br /> Monitor all Network Traffic Except Your Current ssh Connection.<br /><br /> $ tcpdump -i eth0 -nN -vvv -xX -s 1500 port not 22<br /><br /> Or to filter out port 123 as well, getting the full length of the packet<br /> (-s 0), use the following:<br /><br /> $ tcpdump -i eth0 -nN -vvv -xX -s 0 port not 22 and port not 123<br /><br /> Or to filter on a certain host, say 81.169.158.205<br /><br /> $ tcpdump -i eth0 -nN -vvv -xX port not 22 and host 81.169.158.205<br /><br /> Just want IP addresses and a little bit of data? Then<br /> use this. The "-c 20" is to stop after 20 packets.<br /><br /> $ tcpdump -i eth0 -nN -s 1500 port not 22 -c 20<br /><br /> If you're looking for signs of DoS attacks, the following shows just the SYN<br /> packets on all interfaces:<br /><br /> $ tcpdump 'tcp[13] & 2 == 2'<br /><br /><br /><br />TIP 48:<br /><br /> Where are the GNU Reference Manuals?<br /><br /> http://www.gnu.org/manual/manual.html<br /><br /> Also worth a look is the "Linux Documentation Project"<br /><br /> http://en.tldp.org/<br /><br /> and the Red Hat manuals<br /><br /> http://www.redhat.com/docs/manuals/enterprise/<br /><br /><br /><br />TIP 49:<br /><br /> Setting or Changing the Library Path.<br /><br /> The following file contains the settings to be added or deleted<br /><br /> /etc/ld.so.conf<br /><br /> After this file is edited, you must run the following:<br /><br /> $ ldconfig<br /><br /> See "man ldconfig" for more information.<br /><br /><br /><br />TIP 50:<br /><br /> Working with Libraries in C<br /><br /> Assume the following three files:<br /><br /> $ cat ./src/test.c<br /><br /> #include <stdio.h><br /> int test(int t)<br /> {<br /> printf("%d\n",t);<br /> return t;<br /> }<br
/><br /> $ cat ./src/prog1.c<br /><br /> /*<br /> program: prog1.c<br /> dependencies: test.c<br /><br /> compiling this program:<br /> gcc -o prog test.c prog1.c<br /><br /> Note the libpersonal include<br /> should be removed if NOT using the<br /> library<br /> */<br /><br /> #include <libpersonal.h><br /> #include <stdio.h><br /> int<br /> main(int argc, char **argv)<br /> {<br /> test(45);<br /> }<br /><br /> $ cat ./include/libpersonal.h<br /><br /> extern int test(int);<br /><br /><br /> prog1.c needs the test function in test.c.<br /> To compile, so that both programs work together, do the following:<br /><br /> $ cd src<br /> $ gcc -o prog test.c prog1.c -I../include<br /><br /> However, if you want to create your own static library, then run the following:<br /><br /> $ mkdir -p ../lib<br /> $ gcc -c test.c -o ../lib/test.o<br /> $ cd ../lib<br /> $ ar r libpersonal.a test.o<br /> $ ranlib libpersonal.a<br /><br /> or, the ar and ranlib commands can be combined as follows:<br /><br /> $ ar rs libpersonal.a test.o<br /><br /> To compile the program with the static library:<br /><br /> $ cd ../src<br /> $ gcc -I../include -L../lib -o prog1 prog1.c -lpersonal<br /><br /><br /> The -I../include tells gcc to look in the ../include directory for<br /> libpersonal.h, and -L../lib tells gcc where to find the<br /> "libpersonal.a" library.<br /><br /> $ cd ..<br /> $ tree src lib include<br /> src<br /> |-- prog<br /> |-- prog1<br /> |-- prog1.c<br /> `-- test.c<br /> lib<br /> |-- libpersonal.a<br /> `-- test.o<br /> include<br /> `-- libpersonal.h<br /><br /> This was a STATIC library. 
Often times you will want to use a SHARED<br /> or dynamic library.<br /><br /> SHARED LIBRARY:<br /><br /> You must recompile test.c with -fpic option.<br /><br /> $ cd ../lib<br /> $ gcc -c -fpic ../src/test.c -o test.o<br /><br /> Next create the libpersonal.so file.<br /><br /> $ gcc -shared -o libpersonal.so test.o<br /><br /> Now, compile the source prog1.c as follows:<br /><br /> $ cd ../src<br /> $ gcc -Wl,-R../lib -L../lib -I../include -o prog2 prog1.c -lpersonal<br /><br /> This should work fine. But, take a look at prog2 using the ldd command.<br /><br /> $ ldd prog2<br /><br /> libpersonal.so => ../lib/libpersonal.so (0x40017000)<br /> libc.so.6 => /lib/tls/libc.so.6 (0x42000000)<br /> /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)<br /><br /> If you move the program prog2 to a different location, it will not run.<br /> Instead you will get the following error:<br /><br /> prog2: error while loading shared libraries: libpersonal.so:<br /> cannot open shared object file: No such file or directory<br /><br /> To fix this, you should specify the direct path to the library. And in my<br /> case it is rather long<br /><br /> $ gcc -Wl,-R/work/souptonuts/documentation/theBook/lib -L../lib -I../include -o prog2 prog1.c -lpersonal<br /><br /> SPECIAL NOTE: The -R must always follow the -Wl. 
(-Wl,-R<directory>); they always go together.<br /><br /><br /><br />TIP 51:<br /><br /> Actively Monitor a File and Send Email when an Expression Occurs.<br /><br /> This is a way to monitor "/var/log/messages" or any file for certain changes.<br /> The example below actively monitors "stuff" for the word "now", and as soon as<br /> "now" is added to the file, the contents of msg are sent to the user<br /> mikechirico@hotmail.com<br /><br /> $ tail -f stuff | \<br /> awk ' /now/ { system("mail -s \"This is working\" mikechirico@hotmail.com < msg") }'<br /><br /> Or, you can run a program -- say, get headlines from Slashdot with the program "getslash.php", which<br /> runs on "192.168.1.155" with account "chirico". Assuming you have ssh keys set up, the following<br /> will send mail from the output:<br /><br /> $ ssh chirico@192.168.1.155 "./bin/getslash.php"|mail -s "Slash cron Headlines" mchirico@comcast.net<br /><br /> See (TIP 80) for scraping the headlines on Slashdot and how to get a copy of "getslash.php". If you still<br /> want to use awk:<br /><br /> $ ssh chirico@192.168.1.155 "./bin/getslash.php"| \<br /> awk '{ print $0 | "mail -s \x27 Slash Topics \x27 mchirico@comcast.net "}'<br /><br /> Note the "\x27" is a single quote. Maybe you only want articles dealing with "Linux":<br /><br /> $ ssh chirico@192.168.1.155 "./bin/getslash.php"| \<br /> awk '/Linux/{ print $0 | "mail -s \x27 Slash Topics \x27 mchirico@comcast.net "}'<br /><br /> For $60, you can get a numeric display from "delcom engineering" that you can send messages and<br /> data to. I get weather information off the internet and send it to this device.<br /><br /> http://sourceforge.net/projects/delcom/<br /><br /> (Reference TIP 151 for ssh tips)<br /><br /><br /><br />TIP 52:<br /><br /> Need to Keep Secrets? 
Encrypt it.<br /><br /> To Encrypt:<br /><br /> $ openssl des3 -salt -in file.txt -out file.des3<br /><br /> The above will prompt for a password, or you can put it in<br /> with a -k option, assuming you're on a trusted server.<br /><br /> To Decrypt:<br /><br /> $ openssl des3 -d -salt -in file.des3 -out file.txt -k mypassword<br /><br /> Need to encrypt what you type? Enter the following, then start typing,<br /> and ^D to end.<br /><br /> $ openssl des3 -salt -out stuff.txt<br /><br /><br /><br />TIP 53:<br /><br /> Check that a File has Not Been Tampered With: Use a Cryptographic Hash Function.<br /><br /> md5sum is popular but dated<br /><br /> $ md5sum file<br /><br /> Instead, use one of the following:<br /><br /> $ openssl dgst -sha1 -c file<br /><br /> $ openssl dgst -ripemd160 -c file<br /><br /> All calls give a fixed-length string or "message digest".<br /><br /><br /><br />TIP 54:<br /><br /> Need to View Information About a Secure Web Server? An SSL/TLS test.<br /><br /> $ openssl s_client -connect www.sourceforge.net:443<br /><br /> The above will give a long listing of certificates.<br /><br /> Note, it is also possible to get certificate information about a mail server<br /><br /> $ openssl s_client -connect mail.comcast.net:995 -showcerts<br /><br /> When you run the above command you get two certificates. 
If you copy<br /> and paste both certificates, taking the contents including the<br /> BEGIN and END markers shown below:<br /><br /> -----BEGIN CERTIFICATE-----<br /> ....<br /> -----END CERTIFICATE-----<br /><br /> then create files "comcast0.pem" and "comcast1.pem" out of these certificates and<br /> put them in a directory "/home/donkey/.certs". Then, using the "c_rehash" script from<br /> the openssl source package ("./tools/c_rehash"), run<br /><br /> $ c_rehash .certs<br /> Doing .certs<br /> comcast0.pem => 72f90dc0.0<br /> comcast1.pem => f73e89fd.0<br /><br /> Now it's possible to have fetchmail work with these certs.<br /><br /> #<br /> #<br /> # Sample .fetchmailrc file for Comcast<br /> #<br /> # Check mail every 90 seconds<br /> set daemon 90<br /> set syslog<br /> set postmaster donkey<br /> #set bouncemail<br /> #<br /> # Comcast email is zdonkey but computer account is just donkey<br /> #<br /> poll mail.comcast.net with proto POP3 and options no dns<br /> user 'zdonkey' with pass "somethin35" is 'donkey' here options ssl sslcertck sslcertpath '/home/donkey/.certs'<br /> smtphost comcast.net<br /> # currently not used<br /> mda '/usr/bin/procmail -d %T'<br /><br /><br /> REFERENCE: http://www.openssl.org/<br /> http://www.catb.org/~esr/fetchmail/fetchmail-6.2.5.tar.gz<br /> http://www.madboa.com/geek/openssl/<br /><br /><br /><br /><br />TIP 55:<br /><br /> cp --parents. 
What does this option do?<br /><br /> Assume you have the following directory structure<br /><br /><br /> .<br /> |-- a<br /> | `-- b<br /> | |-- c<br /> | | `-- d<br /> | | |-- file1<br /> | | `-- file2<br /> | `-- x<br /> | `-- y<br /> | `-- file3<br /> `-- newdir<br /><br /><br /> Issue the following command:<br /><br /> $ cp --parents ./a/b/c/d/* ./newdir/<br /><br /> Now you have the following:<br /><br /> .<br /> |-- a<br /> | `-- b<br /> | |-- c<br /> | | `-- d<br /> | | |-- file1<br /> | | `-- file2<br /> | `-- x<br /> | `-- y<br /> | `-- file3<br /> `-- newdir<br /> `-- a<br /> `-- b<br /> `-- c<br /> `-- d<br /> |-- file1<br /> `-- file2<br /><br /> Note that you can't do this with "cp -r" because you'd pick up<br /> the x directory and its contents.<br /><br /> You probably want to use the "cp --parents" command for directory<br /> surgery, where you need to be very specific about what you cut and<br /> copy.<br /><br /><br /><br />TIP 56:<br /><br /> Quickly Locating Files.<br /><br /> The "locate" command quickly searches the indexed database for files. It just<br /> gives the names of the files; but, if you need more information, use it as follows<br /><br /> $ locate document|xargs ls -l<br /><br /> The "locate" database may only get updated every 24 hours. For more recent finds,<br /> use the "find" command.<br /><br /><br /><br />TIP 57:<br /><br /> Using the "find" Command.<br /><br /> List only directories, at most 2 levels down, that have "net" in the name<br /><br /> $ find /proc -type d -maxdepth 2 -iname '*net*'<br /><br /> Find all *.c and *.h files starting from the current "." position.<br /><br /> $ find . \( -iname '*.c' -o -iname '*.h' \) -print<br /><br /> Find all, but skip what's in "/CVS" and "/junk". 
Start from "/work"<br /><br /><br /> $ find /work \( -iregex '.*/CVS' -o -iregex '.*/junk' \) -prune -o -print<br /><br /> Note -regex and -iregex work on the directory as well, which means<br /> you must consider the "./" that comes before all listings.<br /><br /> Here is another example. Find all files except what is under CVS, including<br /> the CVS listings. Also exclude "#" and "~".<br /><br /> $ find . -regex '.*' ! \( -regex '.*CVS.*' -o -regex '.*[#|~].*' \)<br /><br /> Find a *.c file, then run grep on it looking for "stdio.h"<br /><br /> $ find . -iname '*.c' -exec grep -H 'stdio.h' {} \;<br /> sample output --> ./prog1.c:#include <stdio.h><br /> ./test.c:#include <stdio.h><br /><br /> Looking for the disk-hog on the whole system?<br /><br /> $ find / -size +10000k 2>/dev/null<br /><br /> Looking for files changed in the last 24 hours? Make sure you add the<br /> minus sign "-1"; otherwise, you will only find files changed exactly<br /> one 24-hour period ago. With the "-1" you get files changed from now<br /> back to 24 hours ago.<br /><br /><br /> $ find . -ctime -1 -printf "%a %f\n"<br /> Wed Oct 6 12:51:56 2004 .<br /> Wed Oct 6 12:35:16 2004 How_to_Linux_and_Open_Source.txt<br /><br /> Or if you just want files.<br /><br /> $ find . -type f -ctime -1 -printf "%a %f\n"<br /><br /> Details on file status changes in the last 48 hours, current directory (also note "-atime -2").<br /><br /> $ find . -ctime -2 -type f -exec ls -l {} \;<br /><br /> NOTE: if you don't use -type f, you may get "." returned, which,<br /> when run through ls ("ls ."), may list more than what you want.<br /><br /> Also, you may only want the current directory<br /><br /> $ find . -ctime -2 -type f -maxdepth 1 -exec ls -l {} \;<br /><br /> To find files modified between 5 and 10 minutes ago<br /><br /> $ find . 
-mmin +5 -mmin -10<br /><br /><br /> For more example "find" commands, reference the following, looking<br /> for the latest version of "bashscripts.x.x.x.tar.gz":<br /><br /> http://sourceforge.net/project/showfiles.php?group_id=79320&package_id=80711<br /><br /> See "TIP 71" for examples of find using the inode feature. " $ find . -inum <inode> -exec rm -- '{}' \; "<br /><br /> If you don't want error messages, redirect them with "> /dev/null 2>&1", or see<br /> "TIP 81".<br /><br /><br /><br />TIP 58:<br /><br /> Using the "rm" command.<br /><br /> How do you remove a file that has the name "-"? For instance, if you run the command<br /> "$ cat > - " and type some text followed by ^d, how does the "-" file get deleted?<br /><br /> $ rm -- -<br /><br /> The "--" nullifies any rm options.<br /><br /> How do you delete the directory "one", all its subdirectories, and any data?<br /><br /> $ rm -rf ./one<br /><br /> Note, to selectively delete things in a directory, use the find command "TIP 57".<br /> To delete by inode, see "TIP 71".<br /><br /><br /><br />TIP 59:<br /><br /> Giving ownership.<br /><br /> How do you give the user "donkey" ownership of all directories and files under<br /> "./fordonkey" ?<br /><br /> $ chown -R donkey ./fordonkey<br /><br /><br /><br />TIP 60:<br /><br /> Only Permit root login -- give others a message when they try to login.<br /><br /> Create the file "/etc/nologin" containing the contents<br /> of the message.<br /><br /><br /><br />TIP 61:<br /><br /> Limits: file size, open files, pipe size, stack size, max memory size,<br /> cpu time, plus others.<br /><br /> To get a listing of current limits:<br /><br /> $ ulimit -a<br /> core file size (blocks, -c) 0<br /> data seg size (kbytes, -d) unlimited<br /> file size (blocks, -f) unlimited<br /> max locked memory (kbytes, -l) unlimited<br /> max memory size (kbytes, -m) unlimited<br /> open files (-n) 1024<br /> pipe size (512 bytes, -p) 8<br /> stack 
size (kbytes, -s) 8192<br /> cpu time (seconds, -t) unlimited<br /> max user processes (-u) 8179<br /> virtual memory (kbytes, -v) unlimited<br /><br /> Note, as a user you can decrease your limits in the current<br /> shell session, but you cannot increase them. This can be ideal<br /> for testing programs. But first you may want to create<br /> another shell "sh" so that you can "go back to where you started".<br /><br /> $ ulimit -f 10<br /><br /> Now try<br /><br /> $ yes >> out<br /> File size limit exceeded<br /><br /> To set limits on users, make changes to "/etc/security/limits.conf"<br /><br /> bozo - maxlogins 1<br /><br /> This will keep bozo from logging in more than once.<br /><br /> To list hard limits:<br /><br /> $ ulimit -Ha<br /><br /> To list soft limits:<br /><br /> $ ulimit -Sa<br /><br /> To restrict user access by time and day, make changes to<br /> "/etc/security/time.conf"<br /><br /> Also take a look at "/etc/profile" to see what other changes<br /> can be made, plus take a look under "/etc/security/*.conf" for<br /> other configuration files.<br /><br /><br /><br />TIP 62:<br /><br /> Stupid "cat" Tricks.<br /><br /> Also see (TIP 43 and TIP 235).<br /><br /> If you have multiple blank lines that you want to squeeze down to<br /> one line, then try the following:<br /><br /> $ cat -s <file><br /><br /> Want to number the lines?<br /><br /> $ cat -n <file><br /><br /> Want to show tabs?<br /><br /> $ cat -t <file><br /><br /> Need to mark end of lines by "$"? 
The following was suggested by (Amos Shapira)<br /><br /> $ cat -e <file><br /><br /> Want to see all the ctl characters?<br /><br /> /* ctlgen.c<br /> Program to generate ctl characters.<br /><br /> Compile:<br /><br /> gcc -o ctlgen ctlgen.c<br /><br /> Run:<br /><br /> ./ctlgen > mout<br /><br /> Now see the characters:<br /><br /> cat -v mout<br /><br /> Here's a sample output:<br /><br /><br /> $ cat -v mout|tail<br /> test M-v<br /> test M-w<br /> test M-x<br /> test M-y<br /> test M-z<br /> test M-{<br /> test M-|<br /> test M-}<br /> test M-~<br /> test M-^?<br /><br /> */<br /> #include <stdlib.h><br /> #include <stdio.h><br /> int main()<br /> {<br /> int i;<br /><br /> for(i=0; i < 256; ++i)<br /> printf("test %c \n",i);<br /><br /> return 0;<br /> }<br /><br /><br /><br />TIP 63:<br /><br /> Guard against SYN attacks and "ping".<br /><br /> As root do the following:<br /><br /> echo 1 > /proc/sys/net/ipv4/tcp_syncookies<br /><br /> Want to disable "ping" ?<br /><br /> echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all<br /><br /> Disable broadcast/multicast "ping" ?<br /><br /> echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts<br /><br /> And to enable again:<br /><br /> echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all<br /><br /><br /><br />TIP 64:<br /><br /> Make changes to .bash_profile and need to update the current session?<br /><br /> $ source .bash_profile<br /><br /> With the above command, the user does not have to logout.<br /><br /><br /><br />TIP 65:<br /><br /> What are the Special Shell Variables?<br /><br /><br /> $# The number of arguments.<br /> $@ All arguments, as separate words.<br /> $* All arguments, as one word.<br /> $$ ID of the current process.<br /> $? Exit status of the last command.<br /> $0,$1,..$9,${10},${11}...${N} Positional parameters. After "9" you must use the ${k} syntax.<br /><br /> Note that 0 is true. 
For example, if you execute the following, which is true, you get zero.<br /><br /> $ [[ -f /etc/passwd ]]<br /> $ echo $?<br /> 0<br /> And the following is false, which returns a 1.<br /><br /> $ [[ -f /etc/passwdjabberwisnohere ]]<br /> $ echo $?<br /> 1<br /><br /> So true=0 and false=1.<br /><br /><br /> Sample program "mdo" to show the difference between "$@" and "$*"<br /><br /> #!/bin/bash<br /> function myarg<br /> {<br /> echo "$# in myarg function"<br /> }<br /> echo -e "$# parameters on the cmd line\n"<br /> echo -e "calling: myarg \"\$@\" and myarg \"\$*\"\n"<br /> myarg "$@"<br /> myarg "$*"<br /> echo -e "\ncalling: myarg \$@ and myarg \$* without quotes\n"<br /> myarg $@<br /> myarg $*<br /><br /><br /> The result of running "./mdo one two". Note that when quoted, myarg "$*",<br /> returns 1 ... all parameters are smushed together as one word.<br /><br /> [chirico@third-fl-71 theBook]$ ./mdo one two<br /> 2 parameters on the cmd line<br /><br /> calling: myarg "$@" and myarg "$*"<br /><br /> 2 in myarg function<br /> 1 in myarg function<br /><br /> calling: myarg $@ and myarg $* without quotes<br /><br /> 2 in myarg function<br /> 2 in myarg function<br /><br /> Example program "mdo2" shows how the input separator can be changed.<br /> (Note the "|" and ";" must be quoted or escaped so the shell doesn't<br /> treat them as operators.)<br /><br /> #!/bin/bash<br /> IFS='|'<br /> echo -e "$*\n"<br /> IFS=,<br /> echo -e "$*\n"<br /> IFS=\;<br /> echo -e "$*\n"<br /> IFS=$1<br /> echo -e "$*\n"<br /><br /> [chirico@third-fl-71 theBook]$ ./mdo2 one two three four five<br /> one|two|three|four|five<br /><br /> one,two,three,four,five<br /><br /> one;two;three;four;five<br /><br /> oneotwoothreeofourofive<br /><br /><br /><br />TIP 66:<br /><br /> Replace all "x" with "y" and all "y" with "x" in file data.<br /><br /> $ cat data<br /> x y<br /> y x<br /><br /> $ tr "xy" "yx" < data<br /> y x<br /> x y<br /><br /><br /><br />TIP 67:<br /><br /> On a Linux 2.6.x Kernel, how do you directly measure disk activity,<br /> and where is this information documented?<br
/><br /> o The information is documented in the kernel source<br /> ./Documentation/iostats.txt<br /><br /> o The new way of getting this info in 2.6.x is<br /> $ cat /sys/block/hda/stat<br /> 151121 5694 1932358 796675 37867 76770 916994 8353762 0 800672 9150437<br /><br /> Field 1 -- # of reads issued<br /> This is the total number of reads completed successfully.<br /> Field 2 -- # of reads merged, field 6 -- # of writes merged<br /> Reads and writes which are adjacent to each other may be merged for<br /> efficiency. Thus two 4K reads may become one 8K read before it is<br /> ultimately handed to the disk, and so it will be counted (and queued)<br /> as only one I/O. This field lets you know how often this was done.<br /> Field 3 -- # of sectors read<br /> This is the total number of sectors read successfully.<br /> Field 4 -- # of milliseconds spent reading<br /> This is the total number of milliseconds spent by all reads (as<br /> measured from __make_request() to end_that_request_last()).<br /> Field 5 -- # of writes completed<br /> This is the total number of writes completed successfully.<br /> Field 7 -- # of sectors written<br /> This is the total number of sectors written successfully.<br /> Field 8 -- # of milliseconds spent writing<br /> This is the total number of milliseconds spent by all writes (as<br /> measured from __make_request() to end_that_request_last()).<br /> Field 9 -- # of I/Os currently in progress<br /> The only field that should go to zero. 
Incremented as requests are<br /> given to appropriate request_queue_t and decremented as they finish.<br /> Field 10 -- # of milliseconds spent doing I/Os<br /> This field increases so long as field 9 is nonzero.<br /> Field 11 -- weighted # of milliseconds spent doing I/Os<br /> This field is incremented at each I/O start, I/O completion, I/O<br /> merge, or read of these stats by the number of I/Os in progress<br /> (field 9) times the number of milliseconds spent doing I/O since the<br /> last update of this field. This can provide an easy measure of both<br /> I/O completion time and the backlog that may be accumulating.<br /><br /> Note, this is device specific.<br /><br /><br /><br />TIP 68:<br /><br /> Passing Outbound Mail, plus Masquerading User and Hostname.<br /><br /> Here's a specific example:<br /><br /> How does one send and receive Comcast email from a home Linux box,<br /> which uses Comcast as the ISP, if the local account on the Linux<br /> box is different from the Comcast email account? For instance, the<br /> account on the Linux box is "chirico@third-fl-71" and the Comcast<br /> email account is "mchirico@comcast.net". 
Note both the hostname and<br /> username are different.<br /><br /> So, the user "chirico", using "mutt", "elm" or any email program, would<br /> like to send out email to, say, "donkey@comcast.net"; yet donkey would<br /> see the email from "mchirico@comcast.net" and not "chirico@third-fl-71",<br /> and chirico@third-fl-71 would get the replies.<br /><br /> For a full description of how to solve this problem, including related<br /> "sendmail.mc", "site.config.m4", "genericstable", "genericsdomain",<br /> ".procmailrc", and ".forward" files, reference the following:<br /><br /> http://prdownloads.sourceforge.net/souptonuts/README_COMCAST_EMAIL.txt?download<br /><br /> Included in the above link are instructions for building sendmail with<br /> "SASL" and "STARTTLS".<br /><br /><br /><br />TIP 69:<br /><br /> How do you remove just the last 2 lines from a file and save the result?<br /><br /> $ sed '$d' file | sed '$d' > savefile<br /><br /> Or, as Amos Shapira pointed out, it's much easier with the head command<br /> (GNU head accepts a negative count).<br /><br /> $ head -n -2 file > savefile<br /><br /> And, of course, removing just the last line<br /><br /> $ sed '$d' file > savefile<br /><br /> (See REFERENCES (13))<br /><br /> How do you remove extra spaces at the end of a line?<br /><br /> $ sed 's/[ ]*$//g'<br /><br /> How do you remove blank lines, or lines with just spaces and tabs,<br /> saving the original file as file.backup?<br /><br /> $ perl -pi.backup -e "s/^(\s)*\n//" file<br /><br /> Or, you may want to remove empty spaces and tabs at the end of a line<br /><br /> $ perl -pi.backup -e "s/(\s)*\n/\n/" file<br /><br /> Or, you may want to convert dates of the format 01/23/2007 to the<br /> format 2007-01-23. 
This is MySQL's common date format.<br /><br /> $ perl -pi.backup -e "s|(\d+)/(\d+)/(\d+)|\$3-\$1-\$2|" file<br /><br /> Note, you need a backslash in \$3,\$1,\$2 so as to not get bash shell<br /> substitution.<br /><br /><br /><br />TIP 70:<br /><br /> Generating Random Numbers.<br /><br /> $ od -vAn -N4 -tu4 < /dev/urandom<br /> 3905158199<br /><br /><br /><br />TIP 71:<br /><br /> Deleting a File by its Inode Value.<br /><br /> See (PROGRAMMING TIP 5) for creating the file, or<br /><br /> $ cat > '\n\n\n\n\n\n\n'<br /> type some text<br /> ^D<br /><br /> To list the inode and display the characters.<br /><br /> $ ls -libt *<br /><br /> To remove by inode, note the "--" option. This<br /> will keep any special characters in the file name from being<br /> interpreted as "rm" options.<br /><br /> $ find . -inum <inode> -exec rm -- '{}' \;<br /><br /> Or to check contents<br /><br /> $ find . -inum <inode> -exec cat '{}' \;<br /><br /> Reference:<br /> http://www.faqs.org/ftp/usenet/news.answers/unix-faq/faq/part2<br /><br /><br /><br />TIP 72:<br /><br /> Sending Attachments Using Mutt -- On the Command Line.<br /><br /> $ mutt -s "See Attachment" -a file.doc user@domain.net < message.txt<br /><br /> or just the message:<br /><br /> $ echo | mutt -a sample.tar.gz user@domain.net<br /><br /> Reference:<br /> http://www.shelldorado.com/articles/mailattachments.html<br /><br /> Also see (TIP 51).<br /><br /><br /><br />TIP 73:<br /><br /> Want to find out what system calls a program makes?<br /><br /> $ strace <program><br /><br /> Try this with "topen.c" (see PROGRAMMING TIP 5)<br /><br /> $ strace ./topen<br /><br /><br /><br />TIP 74:<br /><br /> RPM Usage Summary.<br /><br /> Install. 
Full filename is needed.<br /><br /> $ rpm -ivh Fedora/RPMS/postgresql-libs-7.4.2-1.i386.rpm<br /><br /> To view the list of files installed with a particular package.<br /><br /> $ rpm -ql postgresql-libs<br /> /usr/lib/libecpg.so.4<br /> /usr/lib/libecpg.so.4.1<br /> /usr/lib/libecpg_compat.so.1<br /> /usr/lib/libecpg_compat.so.1.1<br /> /usr/lib/libpgtypes.so.1<br /> ...<br /><br /> Or, to get the file listing from a package that is not installed, use the<br /> "-p" option.<br /><br /> $ rpm -pql /iso0/Fedora/RPMS/libpcap-0.8.3-7.i386.rpm<br /> /usr/share/doc/libpcap-0.8.3/CHANGES<br /> /usr/share/doc/libpcap-0.8.3/LICENSE<br /> /usr/share/doc/libpcap-0.8.3/README<br /> /usr/share/man/man3/pcap.3.gz<br /><br /> For a dependencies listing, use the "R" option.<br /><br /> $ rpm -qpR /iso0/Fedora/RPMS/libpcap-0.8.3-7.i386.rpm<br /> /sbin/ldconfig<br /> /sbin/ldconfig<br /> kernel >= 2.2.0<br /> libc.so.6<br /> libc.so.6(GLIBC_2.0)<br /> libc.so.6(GLIBC_2.1)<br /> libc.so.6(GLIBC_2.1.3)<br /> libc.so.6(GLIBC_2.3)<br /> openssl<br /> rpmlib(CompressedFileNames) <= 3.0.4-1<br /> rpmlib(PayloadFilesHavePrefix) <= 4.0-1<br /><br /> To check the integrity, use the "-K" option.<br /><br /> $ rpm -K /iso0/Fedora/RPMS/libpcap-0.8.3-7.i386.rpm<br /> /iso0/Fedora/RPMS/libpcap-0.8.3-7.i386.rpm: (sha1) dsa sha1 md5 gpg OK<br /><br /> To list all packages installed.<br /><br /> $ rpm -qa<br /><br /> To find out which package a file belongs to.<br /><br /> $ rpm -qf /usr/lib/libecpg.so.4.1<br /><br /> To uninstall a package<br /><br /> $ rpm -e <package><br /><br /> For building rpm packages reference the following:<br /> http://www-106.ibm.com/developerworks/library/l-rpm1/<br /><br /> To verify the md5 sum so that you know it downloaded ok<br /><br /> $ rpm -K *.rpm<br /><br /> The following is a good reference:<br /> http://www.rpm.org/max-rpm/s1-rpm-install-additional-options.html<br /><br /><br /><br />TIP 75:<br /><br /> Listing Output from a Bash Script.<br /><br /> Add "set -x"<br /><br /> 
#!/bin/bash<br /> set -x<br /> ls<br /> date<br /><br /> Will list the files and output as follows:<br /><br /> + ls<br /> ChangeLog CVS data test<br /> + date<br /> Thu Jul 1 20:41:04 EDT 2004<br /><br /><br /><br />TIP 76:<br /><br /> Using wget.<br /><br /> Grab a web page and pipe it to less. For example, suppose you wanted to pipe the<br /> contents of all these tips, directly from the web.<br /><br /> $ wget -O - http://prdownloads.sourceforge.net/souptonuts/How_to_Linux_and_Open_Source.txt?download|less<br /><br /><br /><br />TIP 77:<br /><br /> Finding IP address and MAC address.<br /><br /> $ /sbin/ifconfig<br /><br /> Note the following output "eth0" and "eth0:1", which means<br /> two IP addresses are tied to 1 NIC (Network Interface Card).<br /><br /> eth0 Link encap:Ethernet HWaddr 00:50:DA:60:5B:AD<br /> inet addr:192.168.1.155 Bcast:192.168.99.255 Mask:255.255.252.0<br /> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1<br /> RX packets:982757 errors:116 dropped:0 overruns:0 frame:116<br /> TX packets:439297 errors:0 dropped:0 overruns:0 carrier:0<br /> collisions:0 txqueuelen:1000<br /> RX bytes:693529078 (661.4 Mb) TX bytes:78400296 (74.7 Mb)<br /> Interrupt:10 Base address:0xa800<br /><br /> eth0:1 Link encap:Ethernet HWaddr 00:50:DA:60:5B:AD<br /> inet addr:192.168.1.182 Bcast:192.168.3.255 Mask:255.255.252.0<br /> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1<br /> RX packets:982757 errors:116 dropped:0 overruns:0 frame:116<br /> TX packets:439299 errors:0 dropped:0 overruns:0 carrier:0<br /> collisions:0 txqueuelen:1000<br /> RX bytes:693529078 (661.4 Mb) TX bytes:78400636 (74.7 Mb)<br /> Interrupt:10 Base address:0xa800<br /><br /> lo Link encap:Local Loopback<br /> inet addr:127.0.0.1 Mask:255.0.0.0<br /> UP LOOPBACK RUNNING MTU:16436 Metric:1<br /> RX packets:785 errors:0 dropped:0 overruns:0 frame:0<br /> TX packets:785 errors:0 dropped:0 overruns:0 carrier:0<br /> collisions:0 txqueuelen:0<br /> RX bytes:2372833 (2.2 Mb) TX bytes:2372833 
(2.2 Mb)<br /><br /><br /><br />TIP 78:<br /><br /> DOS to UNIX and UNIX to DOS.<br /><br /> $ dos2unix file.txt<br /><br /> And to go the other way, from UNIX to DOS<br /><br /> $ unix2dos unixfile<br /><br /> See the man page, since there are Mac options.<br /><br /><br /> NOTE: If you're working with DOS files, you'll probably want to use<br /> "zip" instead of "gzip" so users on Windows can unzip them.<br /><br /> $ zip test.zip test.txt<br /><br /><br /><br />TIP 79:<br /><br /> Need to Run Interactive Commands? Try "expect".<br /> http://expect.nist.gov/expect.tar.gz<br /><br /> This simple example waits for the input "hi", in some form; as soon as it<br /> arrives, it immediately returns "hello there!". Otherwise, it waits for<br /> 60 seconds, then returns "hello there!".<br /><br /> #!/usr/bin/expect<br /> set timeout 60<br /> expect "hi\n"<br /> send "hello there!\n"<br /><br /><br /> Reference:<br /> http://www.oreilly.com/catalog/expect/chapter/ch03.html<br /><br /> http://www.cotse.com/dlf/man/expect/bulletproof1.htm<br /><br /><br /><br />TIP 80:<br /><br /> Using PHP as a Command Line Scripting Language.<br /><br /> The following will grab the complete file from Slashdot.<br /><br /> #!/usr/bin/php -q<br /><br /> <?php<br /> $fileName = "http://slashdot.org/slashdot.xml";<br /> $rss = file($fileName) or die ("Cannot open file $fileName\n");<br /> for ($index=0; $index < count($rss); $index++)<br /> {<br /> echo $rss[$index];<br /> }<br /> ?><br /><br /> Note, if you want an example that parses the XML of<br /> Slashdot, then download the following:<br /><br /> http://prdownloads.sourceforge.net/souptonuts/php_scripts.tar.gz?download<br /><br /><br /><br />TIP 81:<br /><br /> Discarding all output -- including stderr messages.<br /><br /> $ ls > /dev/null 2>&1<br /><br /> Or sending all output to a file<br /><br /> $ someprog > /tmp/file 2>&1<br /><br /> Sometimes, find displays a lot of errors when searching through<br /> directories that the user doesn't have 
access to. To discard<br /> error messages (stderr), which is normally file descriptor "2",<br /> use the following:<br /><br /> $ find / -iname 'stuff' 2>/dev/null<br /><br /> or to redirect the results elsewhere<br /><br /> $ find / -iname 'stuff' > /tmp/results_of_find 2>/dev/null<br /><br /> Also see (TIP 118).<br /><br /><br /><br />TIP 82:<br /><br /> Using MIX, D. Knuth's assembly-language/machine-code instruction set, used in<br /> his books to illustrate his algorithms.<br /><br /> Download the source:<br /><br /> http://sourceforge.net/project/showfiles.php?group_id=13897<br /><br /> $ ./configure<br /> $ make<br /> $ make install<br /><br /> Documentation can be found at the following link. The link on<br /> SourceForge is not correct, but the one below works.<br /><br /> http://www.gnu.org/software/mdk/manual/<br /><br /><br /><br />TIP 83:<br /><br /> Gnuplot [ http://sourceforge.net/projects/gnuplot/ ].<br /><br /> This software is ideal for producing graphs.<br /><br /> gnuplot> set term png<br /> gnuplot> set output 'testcos.png'<br /> gnuplot> plot cos(x)*sin(x)<br /> gnuplot> exit<br /><br /> Or the following commands can be put into "file"<br /><br /> $ cat > file<br /> set term png<br /> set output 'testcos.png'<br /> plot cos(x)*sin(x)<br /> exit<br /> ^D<br /><br /> Then, run as follows:<br /><br /> $ gnuplot file<br /><br /> Or, suppose you have the following file "/home/chirico/data". 
Comments<br /> with "#" are not read by gnuplot.<br /><br /> # File /home/chirico/data<br /> #<br /> 2005-07-26 1 2.3 3<br /> 2005-07-27 2 3.4 5<br /> 2005-07-28 3 4 6.6<br /> 2005-07-29 4 6 2.5<br /><br /> And you have the following new "file"<br /><br /> set term png<br /> set xdata time<br /> set timefmt "%Y-%m-%d "<br /> set format x "%Y/%m/%d"<br /> set output '/var/www/html/chirico/gnuplot/data.png'<br /> plot '/home/chirico/data' using 1:2 w linespoints title '1st col', \<br /> '/home/chirico/data' using 1:3 w linespoints title '2nd col', \<br /> '/home/chirico/data' using 1:4 w linespoints title '3rd col'<br /> exit<br /><br /> You can now get a graph of this data by running the following:<br /><br /> $ gnuplot file<br /><br /><br /><br />TIP 84:<br /><br /> CPU Information - speed, processor, cache.<br /><br /> $ cat /proc/cpuinfo<br /><br /> processor : 0<br /> vendor_id : GenuineIntel<br /> cpu family : 15<br /> model : 2<br /> model name : Intel(R) Pentium(R) 4 CPU 2.20GHz<br /> stepping : 9<br /> cpu MHz : 2193.221<br /> cache size : 512 KB<br /> fdiv_bug : no<br /> hlt_bug : no<br /> f00f_bug : no<br /> coma_bug : no<br /> fpu : yes<br /> fpu_exception : yes<br /> cpuid level : 2<br /> wp : yes<br /> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr<br /> bogomips : 4325.37<br /><br /> "bogomips" is a rough but useful way to quickly compare two computer speeds. True, it's a<br /> bogus reading, but it's a "good enough for government work" calculation. 
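As an aside, individual fields like "model name" or "bogomips" can be pulled out of /proc/cpuinfo with a short script. This is a sketch: the cpu_field helper name is made up for illustration, and the sample here-doc data stands in for a real /proc/cpuinfo.

```shell
# cpu_field KEY: read "key : value" lines (the /proc/cpuinfo layout)
# on stdin and print the value of the first line whose key matches.
cpu_field() {
  awk -F': *' -v key="$1" '$1 ~ "^"key"[ \t]*$" { print $2; exit }'
}

# Sample data in the same layout as /proc/cpuinfo; on a real system
# you would instead run:  cpu_field "model name" < /proc/cpuinfo
sample="$(printf 'model name\t: Intel(R) Pentium(R) 4 CPU 2.20GHz\ncpu MHz\t\t: 2193.221\nbogomips\t: 4325.37\n')"

printf '%s\n' "$sample" | cpu_field "model name"
printf '%s\n' "$sample" | cpu_field "bogomips"
```

The awk field separator ': *' splits each line at the colon, and the key match tolerates the tab padding that /proc/cpuinfo puts before the colon.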
See (TIP 10) for<br /> "vmstat" and "iostat".<br /><br /><br /><br />TIP 85:<br /><br /> POVRAY - Making Animated GIFs<br /><br /> To see this in action, reference:<br /> http://souptonuts.sourceforge.net/povray/orbit.pov.html<br /><br /> These are the basic command to create:<br /><br /><br /> $ povray orbit.ini -Iorbit.pov<br /> $ convert -delay 20 *.ppm orbit.gif<br /><br /> By the way, convert is a program from imagemagick, and it can<br /> be downloaded from ( http://www.imagemagick.org ).<br /><br /> The following is "orbit.pov"<br /><br /><br /> #include "colors.inc"<br /> #include "finish.inc"<br /> #include "metals.inc"<br /> #include "textures.inc"<br /> #include "stones.inc"<br /> #include "skies.inc"<br /><br /> camera {<br /> location <><br /> look_at <><br /> focal_point <0,><br /> blur_samples 20<br /> }<br /><br /> light_source {<br /> <><br /> color White<br /> area_light <2,0,0>,<0,0,2>, 2, 2<br /> adaptive 1<br /> fade_distance 8<br /> fade_power 1<br /> }<br /><br /> sky_sphere {<br /> S_Cloud3<br /> }<br /><br /> plane { <0,>, -1<br /> texture {<br /> pigment {<br /> checker color Blue, color White<br /> }<br /> finish {Phong_Glossy}<br /> }<br /> }<br /> #declare ball0=<br /> sphere {<br /> <0.5,>, 1<br /> texture {<br /> T_Silver_1E<br /> pigment {Yellow}<br /> }<br /> }<br /><br /> #declare ball1=<br /> sphere {<br /> <3,>, 0.5<br /> texture {<br /> T_Silver_1E<br /> pigment {Blue}<br /> }<br /> }<br /><br /> #declare ball2=<br /> sphere {<br /> <3,>, 1<br /> texture {<br /> T_Silver_1E<br /> pigment {Green}<br /> }<br /> }<br /><br /> object {ball0 rotate 360*clock*y}<br /> object {ball1 rotate 720*clock*y}<br /> object {ball2 rotate 360*(1 - clock)*y}<br /><br /><br /> And, "orbit.ini" follows:<br /><br /> Output_File_Type=P<br /><br /> Width=320<br /> Height=240<br /><br /> Initial_Frame=1<br /> Final_Frame=10<br /> Antialias=true<br /><br /> Subset_Start_Frame=1<br /> Subset_End_Frame=10<br /><br /> Cyclic_Animation=on<br /><br /><br 
/><br />TIP 86:<br /><br /> GPG -- GnuPG<br /><br /> Reference: http://www.gnupg.org/documentation/faqs.html<br /> http://codesorcery.net/mutt/mutt-gnupg-howto<br /> http://www.gnupg.org/(en)/download/index.html<br /> (SCRIPT 4) at the following link:<br /> http://prdownloads.sourceforge.net/souptonuts/README_common_script_commands.html?download<br /><br /><br /> Generate key:<br /><br /> $ gpg --gen-key<br /><br /> Generate public key ID and fingerprint<br /><br /> $ gpg --fingerprint<br /><br /> Get a list of keys:<br /><br /> $ gpg --list-keys<br /><br /> pub 1024D/A11C1499 2004-07-15 Mike Chirico <mchirico@comcast.net><br /> sub 1024g/E1A3C2B3 2004-07-15<br /><br /> Encrypt<br /><br /> $ gpg -r Mike --encrypt sample.txt<br /><br /> This will produce "sample.txt.asc", which is a binary file. Note, I can use "Mike" because that's the<br /> name on the list of keys. Again, it will be a binary file.<br /><br /> Encrypt using "ASCII-armored text" (--armor), which is probably what you want when sending in the body of an<br /> email, or some document.<br /><br /> $ gpg -r Mike --encrypt --armor sample.txt<br /> or<br /> $ gpg -r Mike -e -a sample.txt<br /> or<br /> $ gpg --output somefile.asc --armor -r Mike --encrypt sample.txt<br /><br /> The above 3 commands produce ASCII-armored output (the first two write "sample.txt.asc", the third<br /> "somefile.asc"), so you can "$ cat" the result without fear, since there are no binary characters.<br /> Yes, you could even compile a program "$ g++ -o test test.c", then,<br /> "$ gpg --output test.asc -r Mike --encrypt --armor test". 
However, when decrypting, make sure to redirect<br /> the output.<br /><br /> $ gpg --decrypt test.asc > test<br /><br /> Export "public" key:<br /><br /> $ gpg --armor --export Mike > m1.asc<br /><br /> Signing the file "message.txt":<br /><br /> $ gpg --clearsign message.txt<br /><br /><br /> Sending the key to the "key-server"<br /><br /> First, list the keys.<br /><br /> $ gpg --list-keys<br /> /home/chirico/.gnupg/pubring.gpg<br /> v------------------ Use this with "0x" in front -------<br /> pub 1024D/A11C1499 2004-07-15 Mike Chirico <mchirico@comcast.net> |<br /> sub 1024g/E1A3C2B3 2004-07-15 |<br /> |<br /> v----------------------------------------------------<br /> $ gpg --send-keys 0xA11C1499<br /><br /> The above sends it to the keyserver defined in "/home/chirico/.gnupg/gpg.conf". Other key servers:<br /><br /> wwwkeys.pgp.net<br /> search.keyserver.net<br /> pgp.ai.mit.edu<br /><br /> When you go to your user-group meetings, you need to bring 2 forms of ID, and<br /> list your key fingerprint. Shown below is the command for getting this fingerprint.<br /><br /> $ gpg --fingerprint mchirico@comcast.net<br /> pub 1024D/A11C1499 2004-07-15<br /> Key fingerprint = 9D7F C80D BB7B 4BAB CCA4 1BE9 9056 5BEC A11C 1499<br /> uid Mike Chirico (http://souptonuts.sourceforge.net/chirico/index.php) <mchirico@comcast.net><br /> sub 1024g/E1A3C2B3 2004-07-15<br /><br /><br /> Receiving keys:<br /><br /> The following will retrieve my mchirico@comcast.net key<br /><br /> $ gpg --recv-keys 0xA11C1499<br /><br /><br /> Special Note: If you get the following error "GPG: Warning: Using Insecure Memory", then<br /> "chmod 4755 /path/to/gpg" to set setuid(root) permissions on the gpg binary.<br /><br /><br /> NOTE: If using mutt, just before sending with the "y" option, hit "p" to sign or encrypt.<br /><br /> It's possible to create a gpg/pgp email from the command line. 
For a tutorial on this,<br /> reference (SCRIPT 4) at the following link:<br /> http://prdownloads.sourceforge.net/souptonuts/README_common_script_commands.html?download<br /><br /><br /><br />TIP 87:<br /><br /> Working with Dates: Steffen Beyer has developed a Perl and C module for working with dates.<br /><br /> This software can be downloaded from the following location:<br /> http://www.engelschall.com/u/sb/download/pkg/Date-Calc-5.3.tar.gz<br /><br /> $ wget http://www.engelschall.com/u/sb/download/pkg/Date-Calc-5.3.tar.gz<br /> $ tar -xzvf Date-Calc-5.3.tar.gz<br /> $ cd Date-Calc-5.3<br /> $ cp ./examples/cal.c .<br /> $ gcc cal.c DateCalc.c -o mcal<br /><br /> The file cal.c contains sample function calls from DateCalc.c. Note, "DateCalc.c"<br /> is just a list of functions and includes for "DateCalc.h" and "ToolBox.h".<br /><br /> Or, and this may be easier, just download the following:<br /> http://prdownloads.sourceforge.net/cpearls/date_calc.tar.gz?download<br /><br /> The above link contains a few examples.<br /><br /><br /><br />TIP 88:<br /><br /> Color patterns for mutt.<br /><br /> The colors can be changed in the /home/user/.muttrc file. 
The first field begins with<br /> color, the second field is the foreground color, and the third field is the background<br /> color, or default.<br /><br /> An example .muttrc for colors:<br /><br /> # color patterns for mutt<br /> color normal white black # normal text<br /> color indicator black yellow # actual message<br /> color tree brightmagenta default # thread arrows<br /> color status brightyellow default # status line<br /> color error brightred default # errors<br /> color message magenta default # info messages<br /> color signature magenta default # signature<br /> color attachment brightyellow red # MIME attachments<br /> color search brightyellow red # search matches<br /> color tilde brightmagenta default # ~ at bottom of msg<br /> color markers red default # + at beginning of wrapped lines<br /> color hdrdefault cyan default # default header lines<br /> color bold red default # highlighting bold patterns in body<br /> color underline green default # highlighting underlined patterns in body<br /> color quoted cyan default # quoted text<br /> color quoted1 magenta default<br /> color quoted2 red default<br /> color quoted3 green default<br /> color quoted4 magenta default<br /> color quoted5 cyan default<br /> color quoted6 magenta default<br /> color quoted7 red default<br /> color quoted8 green default<br /> color quoted9 cyan default<br /> color body cyan default "((ftp|http|https)://|news:)[^ >)\"\t]+"<br /> color body cyan default "[-a-z_0-9.+]+@[-a-z_0-9.]+"<br /> color body red default "(^| )\\*[-a-z0-9*]+\\*[,.?]?[ \n]"<br /> color body green default "(^| )_[-a-z0-9_]+_[,.?]?[\n]"<br /> color body red default "(^| )\\*[-a-z0-9*]+\\*[,.?]?[ \n]"<br /> color body green default "(^| )_[-a-z0-9_]+_[,.?]?[ \n]"<br /> color index cyan default ~F # Flagged<br /> color index red default ~N # New<br /> color index magenta default ~T # Tagged<br /> color index cyan default ~D # Deleted<br /><br /><br /> Also see (TIP 190)<br /><br /><br /><br />TIP 89:<br 
/><br /> ps command in detail<br /><br /><br /> Here are the possible codes shown in the state column of "$ ps -e -o state,cmd"<br /><br /><br /> PROCESS STATE CODES<br /> D uninterruptible sleep (usually IO)<br /> R runnable (on run queue)<br /> S sleeping<br /> T traced or stopped<br /> Z a defunct ("zombie") process<br /><br /> < high-priority (not nice to other users)<br /> N low-priority (nice to other users)<br /> L has pages locked into memory (for real-time and custom IO)<br /> s is a session leader<br /> l is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)<br /> + is in the foreground process group<br /><br /> For instance:<br /><br /> Note that -o is for user-defined output, and -e selects<br /> all processes.<br /><br /> $ ps -e -o pid,state,start,time,etime,cmd<br /><br /> ...<br /> 9946 S 15:40:45 00:00:00 02:23:29 /bin/bash -i<br /> 9985 T 15:41:24 00:00:01 02:22:50 emacs mout2<br /> 10003 T 15:43:59 00:00:00 02:20:15 emacs NOTES<br /> 10320 T 17:38:42 00:00:00 25:32 emacs stuff.c<br /> ...<br /><br /> You may want the command below, without the -e, which will give only the<br /> processes under the current terminal.<br /><br /> $ ps -o pid,state,start,time,etime,cmd<br /><br /> Want to find what's impacting your load?<br /><br /> $ ps -e -o %cpu,pid,state,start,time,etime,%mem,cmd|sort -rn|less<br /><br /><br /><br /> $ ps aux<br /><br /> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND<br /> root 1 0.0 0.0 1380 480 ? S Aug04 0:00 init [3]<br /> root 2 0.0 0.0 0 0 ? SWN Aug04 0:00 [ksoftirqd/0]<br /> root 3 0.0 0.0 0 0 ? SW< Aug04 0:00 [events/0]<br /> root 4 0.0 0.0 0 0 ? 
SW< Aug04 0:00 [khelper]<br /> ...<br /><br /> Or, if you want to see the environment add the -e option<br /><br /> $ ps aeux<br /><br /> ...<br /> chirico 2735 0.0 0.1 4400 1492 pts/0 S Aug04 0:00 -bash USER=chirico LOGNAME=chirico HOME=/home/chirico PATH=/usr/<br /> chirico 2771 0.0 0.0 4328 924 pts/0 S Aug04 0:00 screen -e^Pa -D -R HOSTNAME=third-fl-71.localdomain TERM=xterm S<br /> chirico 2772 0.0 0.6 9476 6352 ? S Aug04 0:54 SCREEN -e^Pa -D -R HOSTNAME=third-fl-71.localdomain TERM=xterm S<br /> chirico 2773 0.0 0.1 4432 1548 pts/1 S Aug04 0:10 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre<br /> chirico 2797 0.0 0.1 4416 1496 pts/2 S Aug04 0:00 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre<br /> root 2821 0.0 0.0 4100 952 pts/2 S Aug04 0:00 su -<br /> root 2822 0.0 0.1 4384 1480 pts/2 S Aug04 0:00 -bash<br /> chirico 2862 0.0 0.1 4428 1524 pts/3 S Aug04 0:00 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre<br /> sporkey 2946 0.0 0.2 6836 2960 ? S Aug04 0:15 fetchmail<br /> chirico 2952 0.0 0.1 4436 1552 pts/5 S Aug04 0:00 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre<br /> chirico 3880 0.0 0.1 4416 1496 pts/6 S Aug05 0:00 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre<br /> root 3904 0.0 0.0 4100 956 pts/6 S Aug05 0:00 su - donkey<br /> donkey 3905 0.0 0.1 4336 1452 pts/6 S Aug05 0:00 -bash<br /> donkey 3938 0.0 0.2 6732 2856 ? S Aug05 0:14 fetchmail<br /> chirico 3944 0.0 0.1 4416 1496 pts/7 S Aug05 0:00 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre<br /> ...<br /><br /> There is also a -f "forrest" option. Also note below " -bash" is the start of a login shell.<br /><br /> $ ps aeuxwwf<br /> <br /> The ww option above gives a wide format with all variables. Use the above command if you plan<br /> to parse through a Perl script. Otherwise, it may be easier to do a quick read using the command<br /> below, without "ww". 
<br /><br /> $ ps aeuxf<br /><br /> ...<br /> root 2339 0.0 0.1 3512 1444 ? S Dec01 0:00 /usr/sbin/sshd<br /> root 25651 0.0 0.1 6764 1980 ? S Dec23 0:00 \_ /usr/sbin/sshd<br /> chirico 25653 0.0 0.2 6840 2236 ? S Dec23 0:14 \_ /usr/sbin/sshd<br /> chirico 25654 0.0 0.1 4364 1440 pts/4 S Dec23 0:00 \_ -bash USER=chirico LOGNAME=chirico HOME=/home/chirico<br /> chirico 25690 0.0 0.0 4328 920 pts/4 S Dec23 0:00 \_ screen -e^Pa -D -R HOSTNAME=third-fl-71.localdomain TERM=xterm<br /> root 2355 0.0 0.0 2068 904 ? S Dec01 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid<br /> ...<br /><br /> It is also possible to list processes by command name. For example, the following command will only list the emacs<br /> processes.<br /><br /> $ ps -fC emacs<br /> UID PID PPID C STIME TTY TIME CMD<br /> chirico 5049 5020 0 May11 pts/13 00:00:00 emacs -nw Notes<br /> chirico 12368 5104 0 May12 pts/18 00:00:00 emacs -nw dnotify.c<br /> chirico 19792 18028 0 May13 pts/20 00:00:00 emacs -nw hello.c<br /> chirico 14034 27367 0 18:52 pts/8 00:00:00 emacs -nw How_to_Linux_and_Open_Source.txt<br /><br /> You may also want to consider using top in batch mode. Here the "-n 1" means refresh once,<br /> and the "b" is for batch. The "fmt -s" is to put it in a more readable format.<br /><br /> $ top -n 1 b |fmt -s >>statfile<br /><br /><br /><br /><br />TIP 90:<br /><br /> Learning Assembly.<br /><br /> Once you have written the source, assuming the file is "exit.s", it can be assembled and linked as follows:<br /><br /> $ as exit.s -o exit.o<br /> $ ld exit.o -o exit<br /><br /><br /> Here is the program:<br /><br /> #<br /> #INPUT: none<br /> #<br /> #OUTPUT: returns a status code. 
This can be viewed<br /> # by typing<br /> #<br /> # echo $?<br /> #<br /> # after running the program<br /> #<br /> #VARIABLES:<br /> # %eax holds the system call number<br /> # (this is always the case)<br /> #<br /> # %ebx holds the return status<br /> #<br /> .section .data<br /> .section .text<br /><br /> .globl _start<br /> _start:<br /> movl $1, %eax # this is the linux kernel command<br /> # number (system call) for exiting<br /> # a program<br /> movl $0, %ebx # this is the status number we will<br /> # return to the operating system.<br /> # Change this around and it will<br /> # return different things to<br /> # echo $?<br /> int $0x80 # this wakes up the kernel to run<br /> # the exit command<br /><br /> After running this program, you can get the exit code.<br /><br /> $ echo $?<br /> 0<br /><br /> That is about all it does, but get the book for more details. The<br /> book is free.<br /><br /> http://savannah.nongnu.org/download/pgubook/<br /><br /><br /><br />TIP 91:<br /><br /> Creating a sandbox for reiserfstune, debugreiserfs and ACL. Also see TIP 4.<br /><br /> Assume you have a reiserfs file system created from a disk file, which<br /> means you have done something like the following:<br /><br /> # dd if=/dev/zero of=disk-rfs count=102400<br /> # losetup /dev/loop4 ./disk-rfs<br /> # mkfs -t reiserfs /dev/loop4<br /> # mkdir /fs2<br /> # mount -o loop,acl ./disk-rfs /fs2<br /><br /> Now, you can run reiserfstune. 
But, first you will need to umount fs2<br /><br /> # umount /fs2<br /> # reiserfstune ./disk-rfs<br /><br /> Or you can run the debug command<br /><br /> # debugreiserfs -J ./disk-rfs<br /><br /> Now, suppose you run through a lot of the debug options on<br /> http://www.namesys.com/ and you destroy this file.<br /><br /> You can recreate the file and delete the loop device.<br /><br /> # dd if=/dev/zero of=disk-rfs count=102400<br /> # losetup -d /dev/loop4<br /> # mount -o loop,acl ./disk-rfs /fs2<br /><br /> Now, try working with some of the ACL options - you can only do this<br /> with the latest kernel and tools -- Fedora Core 2 will work.<br /><br /> Assume you have 3 users, donkey, chirico and bozo2. You can give<br /> everyone rights to this file system as follows:<br /><br /> # setfacl -R -m d:u:donkey:rwx,d:u:chirico:rwx,d:u:bozo2:rwx /fs2<br /><br /><br /><br />TIP 92:<br /><br /> SpamAssassin - Setup.<br /><br /> Step 1.<br /><br /> Installing the SpamAssassin CPAN utility. You will need to do this<br /> as root.<br /> <br /> $ su -<br /><br /> Once you have root privileges invoke cpan.<br /> <br /> # perl -MCPAN -e shell<br /><br /> cpan><br /><br /> Now install with prerequisites policy set to ask.<br /> <br /> cpan> o conf prerequisites_policy ask<br /> <br /> cpan> install Mail::SpamAssassin<br /> <br /> You will get lots of output as the necessary modules are downloaded and<br /> compiled and installed.<br /><br /> Step 2.<br /><br /> Configuration.<br /><br /> Edit the following "/etc/mail/spamassassin/local.cf"<br /><br /> Here is a look at my file<br /><br /> $ cat /etc/mail/spamassassin/local.cf<br /><br /><br /> # This is the right place to customize your installation of SpamAssassin.<br /> #<br /> # See 'perldoc Mail::SpamAssassin::Conf' for details of what can be<br /> # tweaked.<br /> #<br /> ###########################################################################<br /> #<br /> # rewrite_subject 0<br /> # report_safe 1<br /> # 
trusted_networks 212.17.35.<br /> #<br /> <br /> # Below added from book<br /> # You may want to set this to 5, then, work your way down.<br /> # Currently I have this at 3<br /> required_hits 3<br /> <br /> # This determines how spam is reported. Currently safe email is reported<br /> # in the message.<br /> report_safe 1<br /> <br /> # This will rewrite the subject of the spam message.<br /> rewrite_subject 1<br /> <br /> # By default, SpamAssassin will run RBL checks. If your ISP already<br /> # does this, set this to 1.<br /> skip_rbl_checks 0<br /><br /> Step 3.<br /><br /> Update .procmailrc.<br /><br /> You should update the .procmailrc file as follows. Here is my /home/chirico/.procmailrc file.<br /><br /><br /> $ cat /home/chirico/.procmailrc<br /><br /> PATH=/bin:/usr/bin:/usr/local/bin<br /> MAILDIR=/var/spool/mail<br /> DEFAULT=/var/spool/mail/chirico<br /> LOGFILE=/home/chirico/MailBAG<br /> MYHOME=/home/chirico<br /> # Must have folder MailTRASH<br /> TRASH=/home/chirico/MailTRASH<br /> <br /> # Will get everything from this mail<br /> :0<br /> * ^From:.*sporkey@comcast.net<br /> $DEFAULT<br /> <br /> # Spamassassin<br /> :0fw<br /> * <300000<br /> |/usr/local/bin/spamassassin<br /><br /> Reference:<br /> http://pm-doc.sourceforge.net/<br /><br /><br /><br />TIP 93:<br /><br /> Make Graphs: using dot and neato.<br /><br /> $ dot -Tpng dotfile -o myout.png<br /><br /> To see the output, reference the following:<br /> http://souptonuts.sourceforge.net/code/myout.png<br /><br /> Where "dotfile" is the following:<br /><br /> $ cat dotfile<br /><br /> digraph g<br /> {<br /> node [shape = record];<br /><br /> node0 [ label ="<f0> stuff | <f1> J | <f2> "];<br /> node1 [ label ="<f0> | <f1> E | <f2> "];<br /> node4 [ label ="<f0> | <f1> C | <f2> "];<br /> node6 [ label ="<f0> | <f1> I | <f2> "];<br /> node2 [ label ="<f0> | <f1> U | <f2> "];<br /> node5 [ label ="<f0> | <f1> N | <f2> "];<br /> node9 [ label ="<f0> | <f1> Y | <f2> "];<br /> node8 [ label ="<f0> | <f1> W | 
<f2> "];<br /> node10 [ label ="<f0> | <f1> Z | <f2> "];<br /> node7 [ label ="<f0> | <f1> A | <f2> "];<br /> node3 [ label ="<f0> | <f1> G | <f2> "];<br /><br /><br /> "node0":f0 -> "node1":f1;<br /> "node0":f2 -> "node2":f1;<br /><br /> "node1":f0 -> "node4":f1;<br /> "node1":f2 -> "node6":f1;<br /> "node4":f0 -> "node7":f1;<br /> "node4":f2 -> "node3":f1;<br /><br /> "node2":f0 -> "node5":f1;<br /> "node2":f2 -> "node9":f1;<br /><br /> "node9":f0 -> "node8":f1;<br /> "node9":f2 -> "node10":f1;<br /> }<br /><br /> Checkout the following article:<br /> http://www.linuxjournal.com/article.php?sid=7275<br /><br /> To download this software<br /> http://www.graphviz.org/<br /><br /><br /><br />TIP 94:<br /><br /> Makefile: working with conditions<br /><br /><br /> First note that all the indentations of the file must be<br /> a single tab. There cannot be any spaces, or make will<br /> not run.<br /><br /> $ cat Makefile<br /><br /> # Compiler flags<br /> sqliteLIB := $(shell ls /usr/local/lib/libsqlite.so)<br /> sqlite3LIB := $(shell ls /usr/local/lib/libsqlite3.so)<br /> # all assumes sqlite and sqlite3 are installed<br /> #<br /><br /> test:<br /> ifeq ("$(sqlite3LIB)","/usr/local/lib/libsqlite3.so")<br /> @echo -e "True -- we found the file"<br /> else<br /> @echo "False -- we did not find the file"<br /> endif<br /><br /><br /> So, if I run make I will get the following output.<br /><br /> $ make<br /> True -- we found the file<br /><br /> This is because I have a file /usr/local/lib/libsqlite3.so on my system.<br /> Note how the assignment is made, with the shell command<br /><br /> sqlite3LIB := $(shell ls /usr/local/lib/libsqlite3.so)<br /><br /><br /><br />TIP 95:<br /><br /> Bash: Conditional Expressions<br /><br /> if [ -e /etc/ntp.conf ]<br /> then<br /> echo "You have the ntp config file"<br /> else<br /> echo "You do not have the ntp config file"<br /> fi<br /><br /> Now using an AND condition inside the [ ]. 
By the way, above, you<br /> can put the "then" on the same line as the if "if [ -e /etc/ntp.conf ]; then"<br /> as long as you use the ";".<br /><br /> if [ \( -e /etc/ntp.conf \) -a \( -e /etc/ntp/ntpservers \) ]<br /> then<br /> echo "You have ntp config and ntpservers"<br /> elif [ -e /etc/ntp.conf ]; then<br /> echo " You just have ntp.conf "<br /> elif [ -e /etc/ntp/ntpservers ]; then<br /> echo " You just have ntpservers "<br /> else<br /> echo " you have neither ntp.conf nor ntpservers"<br /> fi<br /><br /> A few things to note above. An else-if is written as "elif", and when<br /> dealing with "(" you will need to escape it as "\(". By the way, "-o" (OR) can<br /> replace "-a" (AND). AND can also be done as follows.<br /><br /> if [ -e /etc/ntp.conf ] && [ -e /etc/ntp/ntpservers ]<br /> then<br /> echo "You have ntp config and ntpservers"<br /> elif [ -e /etc/ntp.conf ]; then<br /> echo " You just have ntp.conf "<br /> elif [ -e /etc/ntp/ntpservers ]; then<br /> echo " You just have ntpservers "<br /> else<br /> echo " you have neither ntp.conf nor ntpservers"<br /> fi<br /><br /> Conditional Expressions (files).<br /><br /><br /> -b file True if file exists and is a block device file<br /> -c file True if file exists and is a character device file<br /> -d file True if file exists and is a directory<br /> -e file True if file exists<br /> -f file True if file exists and is a regular file<br /> -g file True if file exists and is set-group-ID<br /> -G file True if owned by the effective group ID<br /><br /> -k file True if "sticky" bit is set and file exists<br /> -L file True if file exists and is a symbolic link<br /> -n string True if string is non-null<br /><br /> -O file True if file exists and is owned by the effective user ID<br /><br /> -p file True if file is a named pipe (FIFO)<br /> -r file True if file is readable<br /> -s file True if file has size > 0<br /> -S file True if file exists and is a socket<br /><br /> -t file True if file 
is open and refers to a terminal.<br /> -u file True if setuid bit is set<br /> -w file True if file exists and is writable<br /> -x file True if file is executable<br /> -x dir True if directory can be searched<br /><br /> file1 -nt file2 True if file1 modification date newer than file2<br /> file1 -ot file2 True if file1 modification date older than file2<br /> file1 -ef file2 True if file1 and file2 have same inode<br /><br /> Conditional Expressions (Integers).<br /><br /> -lt Less than<br /> -le Less than or equal<br /> -eq Equal<br /> -ge Greater than or equal<br /> -gt Greater than<br /> -ne Not equal<br /><br /> Example usage.<br /><br /> #!/bin/bash<br /> {<br /> while read num value; do<br /> if [ $num -gt 2 ]; then<br /> echo $value<br /> fi<br /> done<br /> } < somefile<br /><br /><br /> Conditional Expressions (Strings).<br /><br /> str1 = str2 str1 matches str2<br /> str1 != str2 str1 does not match str2<br /> str1 < str2 str1 is less than str2<br /> str1 > str2 str1 is greater than str2<br /> -n str1 str1 is not null (length greater than 0)<br /> -z str1 str1 is null (has length 0)<br /><br /><br /><br />TIP 96:<br /><br /> CVS: Working with cvs<br /><br /> INITIAL REPOSITORY:<br /><br /> To create a repository (this is normally done by the system admin). This<br /> is NOT creating a project to check out, but the location where everything<br /> will be stored: the initial repository!<br /><br /> cvs -d repository_root_directory init<br /><br /> Or here is a specific example:<br /><br /> cvs -d /work/cvsREPOSITORY/ init<br /><br /> Creating a directory tree from scratch. 
For a new project, the easiest thing to<br /> do is probably to create an empty directory structure, like this:<br /><br /> $ mkdir sqlite_examples<br /> $ mkdir sqlite_examples/man<br /> $ mkdir sqlite_examples/testing<br /><br /><br /> After that, you use the import command to create the<br /> corresponding (empty) directory structure inside the repository:<br /><br /><br /> $ cd <directory><br /> $ cvs -d repository_root_directory import -m "Created directory structure" yoyodyne/dir yoyo start<br /><br /> Or, here is a specific example.<br /><br /> $ cd sqlite_examples<br /> $ cvs -d /work/cvsREPOSITORY/ import -m 'test SQlite' sqlite_examples sqlite_examples start<br /><br /> Now, you can delete the directory sqlite_examples, or go to another directory and type<br /> the following:<br /><br /> $ cvs -d /work/cvsREPOSITORY/ co sqlite_examples<br /><br /> COOL TOOLS:<br /><br /> 1. cvsps<br /> 2. cvsreport<br /><br /> cvsps, which you can find at http://www.cobite.com/cvsps/cvsps-2.0rc1.tar.gz<br /><br /> $ cvsps -f README_sqlite_tutorial.html<br /><br /><br /><br />TIP 97:<br /><br /> Common vi and vim commands<br /><br /> Command mode ESC<br /><br /> dd delete line<br /> u undo<br /> y yank (copy to buffer)<br /> p/P p after cursor/P before cursor<br /><br /> Ctl-g show current line number<br /> shft-G end of file<br /> n shft-G move to line n<br /><br /> /stuff/ search<br /> n repeat in same direction<br /> N repeat in opposite direction<br /> /return repeat search forward<br /> ?return repeat search backward<br /><br /> "dyy Yank current line to buffer d<br /> "a7yy Yank next 7 lines to buffer a<br /> or<br /> :1,7ya a Yank [ya] lines 1,7 to buffer a<br /> :1,7ya b Yank [ya] lines 1,7 to buffer b<br /><br /> :5 pu b Put [pu] buffer b after line 5<br /><br /> "dP Put the content of buffer d before cursor<br /> "ap Put the contents of buffer a after cursor<br /><br /> :1,4 w! 
file2 Write lines 1,4 to file2<br /> :1,3<br /><br /> :set nu Display line numbers<br /> :set nonum Turns off display<br /><br /> :e <filename> Edit a file in a new buffer<br /><br /> vim<br /> :split<br /> :split <filename><br /> :sp <filename><br /> :split new<br /><br /> ctl-w To move between windows<br /> ctl-w+<br /> ctl-w- To change size<br /> ctl-wv Split windows vertically<br /> ctl-wq Close window<br /><br /> :only To view only 1 window<br /><br /> vim dictionary - put the following command in ~/.vimrc<br /><br /> set dictionary+=/usr/share/dict/words<br /> set thesaurus+=/usr/share/dict/words<br /> <br /> Now, after you type a word, press <ctl-x><ctl-k><ctl-n>, and to<br /> go back in the listing, <ctl-p><br /><br /> butter<ctl-x><ctl-k><ctl-n><br /><br /><br /><br />TIP 98:<br /><br /> Using apt-get<br /><br /> $ apt-get update<br /> $ apt-get -s install <package> <---- if everything is OK, then remove the -s<br /><br /> Note you may want to use dpkg to purge if you have to do a reinstall.<br /><br /> $ dpkg --purge exim4-base<br /> $ dpkg --purge exim4-config<br /> $ apt-get install exim4<br /><br /> $ dpkg-reconfigure exim4-config<br /> <br /><br /><br />TIP 99:<br /><br /> Mounting a cdrom on openbsd and installing packages<br /><br /> $ mkdir -p /cdrom<br /> $ mount /dev/cd0a /cdrom<br /> $ cd /cdrom<br /><br /> To add packages<br /><br /> $ pkg_add -v <directory><br /><br /> Mounting a cdrom on linux to a user's home sub-directory:<br /><br /> $ mkdir -p /home/chirico/cdrom<br /> $ mount /dev/cdrom /home/chirico/cdrom<br /><br /><br /><br />TIP 100:<br /><br /> Creating a boot floppy for knoppix cd:<br /><br /> $ dd if=/mnt/cdrom/KNOPPIX/boot.img of=/dev/fd0 bs=1440k<br /><br /> References:<br /> http://www.knoppix.net/docs/index.php/BootFloppyHowTo<br /><br /> For a lot of the knoppix how-to's<br /> http://www.knoppix.net/docs/index.php/<br /><br /><br /><br />TIP 101:<br /><br /> Diction and Style Tools for Linux http://ftp.gnu.org/gnu/diction/<br /><br /> $ 
diction mytext|less<br /><br /> Or, this can be done interactively<br /><br /> $ diction<br /> This is more text to read and you can do with it<br /> what you want.<br /> (stdin):1: This is more text to read and you [can -> (do not confuse with "may")] do with it what you want.<br /><br /> DESCRIPTION<br /> Diction finds all sentences in a document that contain phrases from a<br /> database of frequently misused, bad or wordy diction. It further<br /> checks for double words. If no files are given, the document is read<br /> from standard input. Each found phrase is enclosed in [ ] (brackets).<br /> Suggestions and advice, if any, are printed headed by a right arrow ->.<br /> A sentence is a sequence of words that starts with a capitalised word<br /> and ends with a full stop, double colon, question mark or exclamation<br /> mark. A single letter followed by a dot is considered an abbreviation,<br /> so it does not terminate a sentence. Various multi-letter abbreviations<br /> are recognized; they do not terminate a sentence either.<br /><br /><br /><br />TIP 102:<br /><br /> Using a mail alias.<br /><br /> Suppose you want all root mail on your system to go to one account, root@main.com<br /><br /> In the following file:<br /><br /> /etc/aliases<br /><br /> Add this line<br /><br /> root: root@main.com<br /><br /> Next, run newaliases [/usr/bin/newaliases] as follows:<br /><br /> $ newaliases<br /><br /><br /> Special note: It's possible to send mail to more than one address. Suppose you want<br /> mail going to root@main.com above, plus you want it going to user donkey<br /> on the local system.<br /><br /> root: root@main.com donkey<br /><br /><br /><br />TIP 103:<br /><br /> Chrony - this service is similar to ntp. 
It keeps accurate time<br /> on your computer by checking against a very accurate clock across<br /> a network with various time delays.<br /><br /> Reference: http://go.to/chrony<br /><br /> In the file "/etc/chrony/chrony.conf" add/replace the following<br /><br /> server 146.186.218.60<br /> server 128.118.25.3<br /> server 128.2.129.21<br /><br /> Next start the chrony service<br /><br /> $ /etc/init.d/chrony restart<br /><br /> Next verify that this is working. It may take 20 or 30 minutes to update<br /> the clock.<br /><br /><br /> Shell command:<br /> # chronyc<br /> chronyc> sourcestats<br /> 210 Number of sources = 3<br /> Name/IP Address NP NR Span Frequency Freq Skew Std Dev<br /> ========================================================================<br /> b50.cede.psu.edu 2 0 64 0.000 2000.000 4000ms<br /> otc2.psu.edu 2 0 66 0.000 2000.000 4000ms<br /> FS3.ECE.CMU.EDU 2 0 64 0.000 2000.000 4000ms<br /> chronyc><br /><br /> It is probably best to let chrony do its work. However, if you want to<br /> set both the hardware and software clock, the following will work:<br /><br /> Sets the hardware clock<br /> # hwclock --set --date="12/10/04 10:18:05"<br /> Sync the hardware clock to software<br /> # hwclock --hctosys<br /><br /> Normally the system keeps accurate time with the software clock.<br /><br /><br /><br />TIP 104:<br /><br /> NFS mount<br /><br /> SERVER (192.168.1.182)<br /><br /> Make sure nfs is running on the server<br /><br /> $ /etc/init.d/nfs restart<br /><br /> At the server the contents of /etc/exports for<br /> allowing 2 computers (192.168.1.171 and 192.168.1.71)<br /> to access the home directory of this server. 
Note that<br /> read write (rw) access is allowed.<br /><br /> $ cat /etc/exports<br /> /home 192.168.1.171(rw)<br /> /home 192.168.1.71(rw)<br /><br /> Or, if you have a lot of clients on 192.168.1.* then consider<br /> the following:<br /><br /> /home 192.168.1.0/255.255.255.0(rw)<br /><br /> Next, still at the server, run the exportfs command<br /><br /> $ exportfs -rv<br /><br /> IPTABLES (lokkit). If you're using fedora with the default lokkit firewall<br /> then you can put the following under "Other ports".<br /><br /> Other ports nfs:tcp nfs:udp<br /><br /><br /> If the above does not work, or you are not using lokkit,<br /> IPTABLES (values in /etc/sysconfig/iptables on SERVER )<br /><br /> # NFS Need to accept fragmented packets and may not have a header<br /> # so you will not know where they are coming from<br /> -A INPUT -f -j ACCEPT<br /> -A INPUT -p tcp -m tcp -s 192.168.1.171 -m multiport --dports 111,683,686,685,1026,2049,2219 -j ACCEPT<br /> -A INPUT -p tcp -s 192.168.1.171 -d 0/0 --dport 32765:32768 -j ACCEPT<br /> -A INPUT -p udp -m udp -s 192.168.1.171 -m multiport --dports 111,683,686,685,1026,2049,2219 -j ACCEPT<br /> -A INPUT -p udp -s 192.168.1.171 -d 0/0 --dport 32765:32768 -j ACCEPT<br /> <br /> -A INPUT -f -j ACCEPT<br /> -A INPUT -p tcp -m tcp -s 192.168.1.71 -m multiport --dports 111,683,686,685,1026,2049,2219 -j ACCEPT<br /> -A INPUT -p tcp -s 192.168.1.71 -d 0/0 --dport 32765:32768 -j ACCEPT<br /> -A INPUT -p udp -m udp -s 192.168.1.71 -m multiport --dports 111,683,686,685,1026,2049,2219 -j ACCEPT<br /> -A INPUT -p udp -s 192.168.1.71 -d 0/0 --dport 32765:32768 -j ACCEPT<br /><br /> (Reference: http://nfs.sourceforge.net/nfs-howto/server.html)<br /> and<br /> (Reference: http://nfs.sourceforge.net/nfs-howto/security.html)<br /><br /><br /> CLIENT1 (192.168.1.171)<br /><br /> $ mkdir -p /home2<br /><br /> $ cat /etc/fstab<br /> 192.168.1.182:/home /home2 nfs rw 0 0<br /><br /> $ mount -a -t nfs<br /><br /> Or to do a one time mounting by 
hand<br /><br /> $ mount -t nfs 192.168.1.182:/home /home2<br /><br /> Now /home2 on the client will be /home on the server<br /><br /> Reference:<br /> http://nfs.sourceforge.net/nfs-howto/index.html<br /><br /> MONITOR NFS:<br /><br /> To monitor the client:<br /><br /> $ nfsstat -c<br /><br /> Also note you can "cat /proc/net/rpc/nfs" as well.<br /><br /> To monitor the server (note the -s instead of the -c).<br /><br /> $ nfsstat -s<br /><br /> Also note you can "cat /proc/net/rpc/nfsd" as well.<br /><br /><br /> The following "cat" command is done on the NFS server, and shows which<br /> clients are mounting. This does not correspond to the examples above. By the way,<br /> "root_squash" is the default, and means that root access on the clients is<br /> denied. So, how does the client root get access to these filesystems? You have<br /> to "su - <someuser>".<br /><br /> $ cat /proc/fs/nfs/exports<br /> # Version 1.1<br /> # Path Client(Flags) # IPs<br /> /home 192.168.1.102(rw,root_squash,sync,wdelay)<br /> /home squeezel.squeezel.com(rw,root_squash,sync,wdelay)<br /> /home 192.168.1.106(rw,root_squash,sync,wdelay)<br /> /home livingroom.squeezel.com(rw,root_squash,sync,wdelay)<br /> /home 10.8.0.1(rw,root_squash,sync,wdelay)<br /> /home closet.squeezel.com(rw,root_squash,sync,wdelay)<br /><br /> (Reference: http://www.vanemery.com/Linux/NFSv4/NFSv4-no-rpcsec.html#automount )<br /><br /><br /><br /><br />TIP 105:<br /><br /> Ports used for Microsoft products<br /> http://www.microsoft.com/canada/smallbiz/sgc/articles/ref_net_ports_ms_prod.mspx?pf=true<br /> Firewalling?<br /> http://www.microsoft.com/technet/prodtechnol/windowsserver2003/library/ServerHelp/428c1bbf-2ceb-4f76-a1ef-0219982eca10.mspx<br /><br /> To find out common port mappings, take a look at "/etc/services"<br /><br /><br /><br />TIP 106:<br /><br /> Man pages: If man pages are formatting incorrectly with PuTTY, try editing<br /> the "/etc/man.config" file with the following changes:<br /><br /> NROFF 
/usr/bin/groff -Tlatin1 -mandoc<br /> NEQN /usr/bin/geqn -Tlatin1<br /><br /> (Reference TIP 7 for using man)<br /><br /><br /><br />TIP 107:<br /><br /> Valgrind: check for memory leaks in your programs. (http://valgrind.org/)<br /><br /> This is how you can run it on the program "a.out" for valgrind version 2.2.0<br /><br /> $ valgrind --logfile=valgrind.output --tool=memcheck ./a.out<br /><br /> This is how you specify the logfile "--log-file" for valgrind-3.0.1<br /><br /> $ valgrind --log-file=valgrind --leak-check=yes --tool=memcheck ./a.out<br /><br /> For C++ programs built with gcc 3.4 and later that use the STL, export GLIBCXX_FORCE_NEW<br /> only when testing, to disable memory caching. Remember to unset it for production,<br /> as it carries a performance penalty. Reference http://valgrind.org/docs/FAQ/<br /><br /><br /><br />TIP 108:<br /><br /> Runlevel Configuring.<br /><br /> The program ntsysv, run as root, gives you an ncurses GUI to what will<br /> run on your system on boot. The chkconfig program (man chkconfig) has<br /> the ability to list which programs are set to start on the chosen<br /> run level.<br /><br /> # ntsysv<br /><br /> # chkconfig<br /><br /> If at this moment you want to see what services are currently running,<br /> then run the following command:<br /><br /> # /sbin/service --status-all<br /><br /> Note, you can also set these manually. For example, normally you will<br /> have files in "/etc/init.d/" that will take parameters like "start","stop"<br /> "restart".<br /><br /> Take a look at "/etc/init.d/mysql" - this file will start and stop the<br /> mysql daemon. So, how does it know which run levels it belongs to, and the<br /> order it gets loaded in relative to other programs? By the K<number> and S<number><br /> values.<br /><br /> $ ls /etc/rc3.d/*mysql<br /><br /> /etc/rc3.d/K85mysql<br /> /etc/rc3.d/S85mysql<br /><br /> So here on my system the start value is 85. 
Looking in /etc/rc3.d, which is<br /> run level 3, any program with a lower number S84something will get loaded<br /> before mysql.<br /><br /> I manually set the run level as follows for mysql.<br /><br /> # cd /etc/rc3.d<br /> # ln -s ../init.d/mysql S85mysql<br /> # ln -s ../init.d/mysql K85mysql<br /><br /> # cd /etc/rc5.d<br /> # ln -s ../init.d/mysql S85mysql<br /> # ln -s ../init.d/mysql K85mysql<br /><br /> Note that I could have chosen other numbers as well. "ntsysv" gives<br /> you a graphical interface.<br /><br /> This is a way of doing this with "chkconfig" at the command prompt.<br /><br /> # chkconfig --list mysqld<br /> mysqld 0:off 1:off 2:off 3:on 4:off 5:on 6:off<br /><br /> Above you can see it's on. Here's how we would have turned this on with chkconfig.<br /><br /> # chkconfig --level 35 mysqld on<br /><br /> Reference:<br /> http://www-128.ibm.com/developerworks/linux/library/l-boot.html?ca=dgr-lnxw99-obg-BootFast<br /><br /><br /><br /><br />TIP 109:<br /><br /> File Alteration Monitor - Gamin, a FAM replacement<br /> http://www.gnome.org/~veillard/gamin/<br /> http://www.gnome.org/~veillard/gamin/sources/<br /> ****** EXAMPLE NOT COMPLETE *****<br /><br /> Working with fam - file alteration monitor. 
Mail uses this to signify<br /> a change in a file's status.<br /><br /> Below is the sample C program ftest.c which can be compiled as<br /> follows:<br /><br /> $ gcc -o ftest ftest.c -lfam<br /><br /> You will need to work with this as root<br /><br /> # ./ftest <somefile><br /><br /><br /> Reference:<br /> http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?db=man&fname=/usr/share/catman/p_man/cat3x/fam.z<br /> http://www.devchannel.org/devtoolschannel/04/05/13/2146252.shtml<br /><br /><br /><br />TIP 110:<br /><br /> glibc - this is the main library used by C, and the link<br /> below gives you examples of everything from sockets, math,<br /> date and time functions, user environment, and much more.<br /><br /> http://www.gnu.org/software/libc/manual/html_mono/libc.html<br /><br /> How do you know which version of glibc you are running?<br /><br /> #include <stdio.h><br /> #include <gnu/libc-version.h><br /> int main (void)<br /> {<br /> puts (gnu_get_libc_version ());<br /> return 0;<br /> }<br /><br /><br /><br />TIP 111:<br /><br /> nslookup and dig - query Internet name servers interactively.<br /><br /> $ nslookup<br /> >chirico.org<br /> Server: 68.80.0.6<br /> Address: 68.80.0.6#53<br /><br /> Name: chirico.org<br /> Address: 66.35.250.210<br /> ><br /><br /> The nslookup command will query the DNS server in "/etc/resolv.conf".<br /> However, you can force a certain DNS server with "- server". For example the<br /> command below goes to the server named dilbert<br /><br /> $ nslookup - dilbert<br /> ><br /><br /> dig:<br /><br /> dig gives you more information. 
You should probably use dig instead<br /> of nslookup.<br /><br /> Below I am forcing the lookup from DNS 68.80.0.6 of the name chirico.org, and<br /> note that the query time is returned too.<br /><br /> $ dig @68.80.0.6 +qr chirico.org<br /><br /> ; <<>> DiG 9.2.1 <<>> @68.80.0.6 +qr chirico.org<br /> ;; global options: printcmd<br /> ;; Sending:<br /> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55908<br /> ;; flags: rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0<br /> <br /> ;; QUESTION SECTION:<br /> ;chirico.org. IN A<br /> <br /> ;; Got answer:<br /> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55908<br /> ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2<br /> <br /> ;; QUESTION SECTION:<br /> ;chirico.org. IN A<br /> <br /> ;; ANSWER SECTION:<br /> chirico.org. 5538 IN A 66.35.250.210<br /> <br /> ;; AUTHORITY SECTION:<br /> chirico.org. 30599 IN NS ns78.worldnic.com.<br /> chirico.org. 30599 IN NS ns77.worldnic.com.<br /> <br /> ;; ADDITIONAL SECTION:<br /> ns78.worldnic.com. 16022 IN A 216.168.225.218<br /> ns77.worldnic.com. 7 IN A 216.168.228.41<br /> <br /> ;; Query time: 155 msec<br /> ;; SERVER: 68.80.0.6#53(68.80.0.6)<br /> ;; WHEN: Thu Dec 23 07:48:23 2004<br /> ;; MSG SIZE rcvd: 127<br /><br /> So what if you wanted to know what name the IP address 66.35.250.210<br /> resolves to, when using DNS server 68.80.0.12?<br /><br /> $ dig @68.80.0.12 -x 66.35.250.210<br /> ...<br /> ;; ANSWER SECTION:<br /> 210.250.35.66.in-addr.arpa. 3600 IN CNAME 210.0/24.250.35.66.in-addr.arpa.<br /> 210.0/24.250.35.66.in-addr.arpa. 3600 IN PTR vhost.sourceforge.net.<br /><br /> Above you can see it resolved to "vhost.sourceforge.net"<br /><br /> Reference ( http://www.tldp.org/HOWTO/DNS-HOWTO-5.html )<br /> Also see TIP 223.<br /><br /><br /><br />TIP 112:<br /><br /> Using GNU Autotools - so you can produce the familiar "./configure" "make" and "make install"<br /> commands. 
There is also a "make dist".<br /><br /> The program sqlite3api.cc and the rest of this code can be found at<br /> http://prdownloads.sourceforge.net/cpearls/autotools.tar.gz?download<br /><br /><br /> A "Makefile.am" is required:<br /><br /> bin_PROGRAMS = sprog<br /> sprog_SOURCES = sqlite3api.cc<br /> sprog_LDADD = @INCLUDES@ @SQLIBOBJS@<br /><br /><br /> In addition, a "configure.in" file is required. Note, AC_CHECK_LIB will<br /> check the "libsqlite3.so" library for the "sqlite3_open" function. Note that<br /> "sqlite3" is a shortcut for "libsqlite3" by convention. If this function<br /> is not found, AC_CHECK_FILE looks for "/usr/local/lib/libsqlite3.a". If<br /> this is found, then "-lsqlite3" is added to the LIBS environment variable.<br /> Also, "-I/usr/local/include" and "-L/usr/local/lib" will be added on the<br /> command line. This is common when someone does not have the library in<br /> the path. (See TIP 49)<br /><br /> dnl Process this file with autoconf to produce a configure script.<br /> AC_INIT(sqlite3api.cc)<br /> AM_INIT_AUTOMAKE(sqliteprog, 1.0)<br /> AC_PROG_CXX<br /> CXXFLAGS='-Wall -W -O2 -s -pipe'<br /> AC_CHECK_LIB(sqlite3,sqlite3_open,[],found=no)<br /> if test "$found" = "no"; then<br /> AC_CHECK_FILE(/usr/local/lib/libsqlite3.a, found=yes)<br /> if test "$found" = "yes"; then<br /> LIBS="$LIBS -lsqlite3"<br /> INCLUDES="$INCLUDES -I/usr/local/include"<br /> EXTRALIB='-L/usr/local/lib'<br /> else<br /> echo "Are you SURE sqlite3 is installed?"<br /> fi<br /> fi<br /> SQLIBOBJS='-Wl,-R/usr/local/lib'<br /> AC_SUBST(INCLUDES)<br /> AC_SUBST(SQLIBOBJS)<br /> AC_SUBST(EXTRALIB)<br /> AC_OUTPUT(Makefile)<br /><br /><br /> To build the configure file, just run the following:<br /><br /> $ aclocal<br /> $ autoconf<br /> $ touch NEWS README AUTHORS ChangeLog<br /> $ automake --add-missing<br /><br /> Now if you want to make a tar.gz file "sqliteprog-1.0.tar.gz", then<br /> all you have to run is the following:<br /><br /> $ make dist<br /><br /> 
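The AC_CHECK_FILE fallback above can be hard to follow the first time through, so here is a rough plain-shell restatement of that one decision. The helper name and the prefix argument are hypothetical (not part of the autotools code above); it simply checks for the static library under a prefix and prints the extra flags configure would append.

```shell
# Hypothetical helper sketching the AC_CHECK_FILE fallback from the
# configure.in above: if a static libsqlite3 sits under $prefix/lib,
# print the -I/-L/-l flags configure would add; otherwise complain.
check_sqlite_prefix() {
    prefix="$1"
    if [ -f "$prefix/lib/libsqlite3.a" ]; then
        # Mirrors INCLUDES, EXTRALIB and LIBS="$LIBS -lsqlite3"
        echo "-I$prefix/include -L$prefix/lib -lsqlite3"
    else
        echo "Are you SURE sqlite3 is installed?" >&2
        return 1
    fi
}
```

So `check_sqlite_prefix /usr/local` prints the flags only when "/usr/local/lib/libsqlite3.a" exists, which is exactly the condition the real configure script tests before touching LIBS, INCLUDES and EXTRALIB.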
Note: did you ever want to save all the output from a ./configure? Well, it<br /> is automatically saved in the "config.log" file. In fact, this file may<br /> contain a lot more than what you saw on the screen.<br /><br /> Also, you may need to rerun ./configure. But before you do, delete<br /> the "config.cache" file to get a clean build.<br /><br /><br /><br />TIP 113:<br /><br /> EMACS - common emacs commands.<br /><br /> M is the ESC<br /> C or c is the Ctl<br /><br /> Shell - when working in a shell, "M-x rename-uniquely" is good for split screen editing.<br /><br /> M-x rename-uniquely Use this for multiple shells (renames buffer so it's not the same shell)<br /> C-c C-z Send job in background (when working in a shell)<br /> C-c C-o commit-kill-output (gets rid of a lot of shell output)<br /> C-c C-r reposition at beginning of output<br /> C-c C-e reposition at end of output<br /> M-x send-invisible Hide passwords - use this before typing a password<br /><br /> Note: if the shell prompt does not show up correctly, then you may want to create a ".emacs_bash"<br /> file with the following contents:<br /><br /> PS1="emacs:\W \$ "<br /><br /> Directories (C-x d) give you a directory listing. You know all those annoying "~" and "#"<br /> files that you get? You can easily delete these when in "dired" mode by hitting<br /> "~", then "d" to flag it for delete. 
Then, hit "x" to confirm deletion.<br /><br /> These are other commands that work on highlighted files in "dired" mode.<br /><br /> R rename<br /> v view<br /> Z compress the file<br /> + create directory<br /><br /> Other common commands:<br /><br /> c-x l list the line you are on, and how many lines in the document.<br /> You will get something like: Page has 4881 lines (4440 + 442),<br /> which means you are on the 4440th line.<br /><br /> c-x rm bookmark make<br /> c-x rb bookmark bounce<br /> <br /> c-x rb notes<br /> c-x rb emacs<br /> <br /> c-x / <r> (save position in register <r>)<br /> c-x j <r> (jump to position in register <r>)<br /> c-x r SPC 1 (mark current point in register 1)<br /> c-x r j 1 (jump to marked point in register 1)<br /> c-x r t <string> (insert string into register)<br /> <br /> c-x r s 1 (save marked region in register 1)<br /> c-x r i 1 (insert marked region)<br /> <br /> c-x c-o (delete all blank lines, except one)<br /> <br /> c-x z (repeat the last command ... stop with an a)<br /> c-x zz (repeat the last command twice)<br /><br /> rectangle<br /> ---------<br /> C-SPC<br /> goto the next region<br /> C-x<br /> C-x<br /> then, C-x r r "name of register"<br /> <br /> to insert the register<br /> C-x r i "name of register"<br /> <br /> macros:<br /> -------<br /> c-x ( start macro<br /> c-x ) end macro<br /> c-x e execute macro<br /> <br /> mail:<br /> -----<br /> c-x m mail<br /> c-c c-s send<br /> <br /> C-x C-e<br /> (insert "\n\nExtra Line of text")<br /> <br /> ;; chirico functions in .emacs<br /> ;; This creates an html template<br /> (defun my-html ()<br /> (interactive)<br /> (insert "<html><br /> <head><br /> <meta equiv="\" content="\"><br /> <meta equiv="\" content="\"><br /> </head><br /> <body bgcolor="\"><br /> <br /> <br /> </body><br /> </html>")<br /> )<br /><br /> Backspace issues when using "emacs -nw"? 
Try putting the following in your "~/.emacs" file<br /><br /> (global-set-key "\C-d" 'backward-delete-char)<br /> (global-set-key "\C-h" 'backward-delete-char)<br /> (global-set-key (kbd "DEL") 'delete-char)<br /><br /><br /><br />TIP 114:<br /><br /> ncftpget - an intelligent ftp client (http://www.ncftp.com/). Also<br /> check your fedora or debian install. This package allows<br /> you to easily download packages from ftp sites.<br /><br /> This is an example of connecting to an ftp site, with a subdirectory, and<br /> downloading, all in one command.<br /><br /> $ ncftpget ftp://ftp.gnu.org/pub/gnu/gcc/gcc-3.2.3/gcc-3.2.3.tar.gz<br /><br /> Or if you want to get the fedora core 3 installs<br /><br /> $ ncftpget ftp://ftp.linux.ncsu.edu/pub/fedora/linux/core/3/i386/iso/FC3*<br /><br /><br /><br />TIP 115:<br /><br /> expr - evaluate expressions. You can use this on the command line<br /><br /> $ expr 6 + 4<br /> 10<br /><br /> Note the spaces. Without spaces, you get the following:<br /><br /> $ expr 6+4<br /> 6+4<br /><br /> If you're using "*", you'll need a "\" before it<br /><br /> $ expr 10 \* 10<br /> 100<br /><br /> This also works for variables<br /><br /> $ var1=34<br /> $ expr $var1 + 3<br /> 37<br /><br /> or<br /><br /> $ var1=2<br /> $ var1=`expr $var1 \* 2`<br /> $ echo $var1<br /> 4<br /><br /> Using bc (see TIP 25), you can get cosine(.23)<br /><br /> $ var1=`echo "c(.23)"|bc -l`<br /> $ echo $var1<br /> .97366639500537483696<br /><br /><br /> You can also do substrings:<br /><br /> $ expr substr "BigBear" 4 4<br /> Bear<br /><br /> And length of strings<br /><br /> $ mstr="12345"<br /> $ expr length $mstr<br /> 5<br /><br /> Regular expressions<br /><br /> $ expr "a3" : [a-z][1-9]<br /> 2<br /><br /> Or you can get a bit fancy<br /><br /> $ myexpr="[a-z][1-9]"<br /> $ echo $myexpr<br /> [a-z][1-9]<br /><br /> $ expr "a3" : $myexpr<br /> 2<br /><br /> This may not be the best way to find out if it is Friday, but<br /> it seems to work. 
It's more of an exercise in xargs.<br /><br /> $ date<br /> Fri Dec 31 16:44:47 EST 2004<br /> $ date|xargs -i expr {} : "[Fri]"<br /> 1<br /><br /><br /><br />TIP 116:<br /><br /> eval<br /><br /> $ mypipe="|"<br /> $ eval ls $mypipe wc<br /> 6 6 129<br /><br /> Did you catch that? The above statement is the same as<br /><br /> $ ls | wc<br /><br /> Where "|" is put into the variable $mypipe<br /><br /> (also see TIP 118)<br /><br /><br /><br />TIP 117:<br /><br /> lxr, glimpse, patchset - tools for reading the kernel source<br /><br /> This example puts some of the files in /home/src since my home<br /> partition is the largest. Plus, you do not want to overwrite<br /> the source in /usr/src/. If you want to put your files elsewhere,<br /> just substitute your desired directory for /home/src.<br /><br /> patchset -- download and setup<br /><br /> $ export SRCDIR=/home/src<br /> $ cd $SRCDIR<br /> $ wget http://www.csn.ul.ie/~mel/projects/patchset/patchset-0.5.tar.gz<br /> $ export PATH=$PATH:$SRCDIR/patchset-0.5/bin<br /><br /> Now edit "/home/src/patchset-0.5/etc/patchset.conf" and set WWW_USER to<br /> whatever your website runs as<br /><br /> export WWW_USER=nobody<br /><br /> Getting kernel source. The last step builds and asks a lot of questions. Enter<br /> yes to things that interest you, since this is what you will see in the source<br /> code. It is not going to build for booting. 
The "download -p" is for downloading<br /> a patch.<br /><br /> $ download 2.6.10<br /> $ createset 2.6.10<br /> $ make-kernel -b 2.6.10<br /><br /> glimpse -- download and setup<br /><br /> $ mkdir -p /home/src/glimpse<br /> $ cd /home/src/glimpse<br /> $ wget http://webglimpse.net/trial/glimpse-latest.tar.gz<br /> $ tar -xzf glimpse-latest.tar.gz<br /> $ cd glimpse-4.18.0<br /> $ ./configure; make<br /> $ make install<br /><br /> lxr -- download and setup<br /><br /> $ mkdir -p /home/src/lxr<br /> $ cd /home/src/lxr<br /> $ wget http://heanet.dl.sourceforge.net/sourceforge/lxr/lxr-0.3.1.tar.gz<br /> $ tar -xzf lxr-0.3.1.tar.gz<br /> $ cd lxr-0.3<br /><br /> Edit "Makefile" and set PERLBIN to "/usr/bin/perl" or wherever perl is<br /> on your system. Also set INSTALLPREFIX to "/var/www/lxr". Then, as root<br /> do the following:<br /><br /> $ make install<br /><br /> Apache changes<br /><br /> Next edit the apache httpd.conf. On my system it is<br /> "/usr/local/apache2/conf/httpd.conf", but if you did a fedora install<br /> I think this file is located at "/etc/httpd/conf/httpd.conf".<br /><br /> Alias /lxr/ "/var/www/lxr/"<br /> <directory><br /> Options ExecCGI Indexes Includes FollowSymLinks MultiViews<br /> AllowOverride all<br /> Order allow,deny<br /> Allow from all<br /><br /> <files><br /> SetHandler cgi-script<br /> </files><br /> </directory><br /><br /> lxr - continued "/var/www/lxr/http/lxr.conf" changes. The following contains<br /> my lxr.conf with changes made to almost every variable. Make sure you use<br /> your website in place of 192.168.1.71<br /><br /> # Configuration file.<br /> <br /> # Define typed variable "v", read valueset from file.<br /> variable: v, Version, [/var/www/lxr/source/versions], [/var/www/lxr/source/defversion]<br /> <br /> # Define typed variable "a". 
First value is default.<br /> variable: a, Architecture, (i386, alpha, m68k, mips, ppc, sparc, sparc64)<br /> <br /> # Define the base url for the LXR files.<br /> baseurl: http://192.168.1.71/lxr/http/<br /> <br /> # These are the templates for the HTML heading, directory listing and<br /> # footer, respectively.<br /> htmlhead: /var/www/lxr/http/template-head<br /> htmltail: /var/www/lxr/http/template-tail<br /> htmldir: /var/www/lxr/http/template-dir<br /> <br /> # The source is here.<br /> sourceroot: /var/www/lxr/source/$v/<br /> srcrootname: Linux<br /> <br /> # "#include <foo.h>" is mapped to this directory (in the LXR source<br /> # tree)<br /> incprefix: /include<br /> <br /> # The database files go here.<br /> dbdir: /var/www/lxr/source/$v/<br /> <br /> # Glimpse can be found here.<br /> glimpsebin: /usr/local/bin/glimpse<br /> <br /> # The power of regexps. This is pretty Linux-specific, but quite<br /> # useful. Tinker with it and see what it does. (How's that for<br /> # documentation?)<br /> map: /include/asm[^\/]*/ /include/asm-$a/<br /> map: /arch/[^\/]+/ /arch/$a/<br /><br /> Now you should be ready to run "make-lxr". Make sure the path is set up for<br /> patchset, which is repeated here. The last step takes a while.<br /><br /> $ export SRCDIR=/home/src<br /> $ cd $SRCDIR<br /> $ export PATH=$PATH:$SRCDIR/patchset-0.5/bin<br /><br /> $ make-lxr 2.6.10<br /><br /> Now you need to index the source. Below, the .glimpse_* files will be put in<br /> /root. 
Check out the -H option if you do not want them there on a temporary<br /> basis, or if you run out of room.<br /><br /> $ glimpseindex -o -t -w 5000 /var/www/lxr/source/2.6.10 >& .glimpse_out<br /><br /> Since the above put the files under /root/.glimpse_*, they should be moved<br /><br /> $ mv /root/.glimpse_* /var/www/lxr/source/2.6.10/.<br /> $ chown -R nobody.nobody ./.glimpse_*<br /><br /><br /><br />TIP 118:<br /><br /> exec - you can change standard output and input without starting a new<br /> process.<br /><br /> The exec below redirects the output from ls and date to a file. Nothing<br /> is shown on the terminal until "exec > /dev/tty" is performed<br /><br /> $ exec > mfile<br /> $ ls<br /> $ date<br /> $ exec > /dev/tty<br /><br /> This is an example of assigning file descriptor 3 to file "output3" for<br /> output, then redirecting "ls" to this descriptor. Finally, file descriptor<br /> 3 is used for input, and the contents are read into the cat command.<br /><br /><br /> $ exec 3>output3<br /> $ ls >& 3<br /> $ exec 3<output3<br /> $ cat <&3<br /> ChangeLog<br /> CVS<br /> How_to_Linux_and_Open_Source.txt<br /> How_to_Linux_and_Open_Source.txt.~1.193.~<br /> mfile<br /> mfile2<br /> mfile3<br /> mftp<br /> output3<br /><br /> Could you redirect the output to 3 files and stderr?<br /><br /> $ exec 3>output3<br /> $ exec 4>output4<br /> $ exec 5>output5<br /><br /> $ ls >& 3 >& 4 >& 5 >& 2 // Nope, can't do this.<br /> output3 output4 output5<br /><br /> Instead, you should do the following:<br /><br /> $ ls | tee output3 | tee output4 |tee output5<br /><br /> Closing the "output" file descriptor<br /><br /> $ exec 3>&-<br /><br /> Closing the "input" file descriptor<br /><br /> $ exec 3<&-<br /><br /> See what is still open on 0-10<br /><br /> $ lsof -a -p $$ -d 0-10<br /><br /> Recursion - the following counts to 5, then quits.<br /><br /> #!/bin/bash<br /> sleep 1<br /> declare -x n<br /> let n=${n:=0}+1<br /> [ $n -le 5 ] && echo "$n" && exec $0<br /><br /> There are 
some real-life applications for this technique, as follows:<br /><br /> #!/bin/bash<br /> declare -x N<br /> declare -x n<br /> N=${N:=$(od -vAn -N1 -tu4 < /dev/urandom)}<br /> let n=${n:=0}+1<br /> [ $(($n%2)) -eq 0 ] && echo "She Loves Me!" || echo "She Loves Me NOT!"<br /> [ $n -lt $N ] && exec $0<br /><br /><br /><br />TIP 119:<br /><br /> runlevel - need to know the current runlevel?<br /><br /> $ who -r<br /> run-level 3 Dec 31 19:02 last=S<br /><br /> Need to know the architecture?<br /><br /> $ arch<br /> i686<br /><br /><br /><br />TIP 120:<br /><br /> at - executes commands at a specified time.<br /><br /> A few examples here. The "1970" example will run<br /> next August 2 even though the year 1970 has long since passed.<br /><br /> $ at 6:30am Jan 12 < program<br /> $ at noon tomorrow < program<br /> $ at 1970 pm August 2 < program<br /><br /> This is an interactive way to use the command:<br /><br /> $ at now + 6 minutes<br /> warning: commands will be executed using (in order) a) $SHELL b) login shell c) /bin/sh<br /> at> ls<br /> at> date > /tmp/5min<br /> at> ^D<br /> job 3 at 2005-01-01 08:50<br /><br /> What jobs are in the queue?<br /><br /> $ atq<br /><br /> or<br /><br /> $ at -l<br /><br /><br /><br />TIP 121:<br /><br /> Creating a Manpage<br /><br /> As root you can copy the following to /usr/local/man/man1/soup.1 which will<br /> give you a manpage for soup.<br /><br /> .\" Manpage for souptonuts.<br /> .\" Contact mchirico@users.sourceforge.com to correct errors or omissions.<br /> .TH man 1 "04 January 2005" "1.0" "souptonuts man page"<br /> .SH NAME<br /> soup \- man page for souptonuts<br /> .SH SYNOPSIS<br /> soup<br /> .SH DESCRIPTION<br /> souptonuts is a collection of linux and open<br /> source tips.<br /> off for golf.<br /> .SH OPTIONS<br /> souptonuts does not take any options.<br /> .SH SEE ALSO<br /> doughnut(1), golf(8)<br /> .SH BUGS<br /> No known bugs at this time.<br /> .SH AUTHOR<br /> Mike Chirico (mchirico@comcast.net 
mchirico@users.sourceforge.net)<br /><br /> So, to view this man page<br /><br /> $ man soup<br /><br /> It's also possible to compress it<br /><br /> $ gzip /usr/local/man/man1/soup.1<br /><br /> For plenty of examples look at the other man pages. Also the following<br /> is helpful. The last one is a tutorial "man 7 mdoc"<br /><br /> $ man manpath<br /> $ man groff<br /> $ man 7 mdoc<br /><br /><br /><br />TIP 122:<br /><br /> dmesg - print out boot messages, or what is in the kernel ring buffer.<br /><br /> If you missed the messages on boot-up, you can use dmesg to print them.<br /><br /> $ dmesg > boot.msg<br /><br /> Or to print, then clear the ring<br /><br /> # dmesg -c > boot.msg<br /><br /> (also see TIP 20)<br /><br /><br /><br />TIP 123:<br /><br /> gnus - emacs email nntp news reader (comcast as example with NO TLS or SSL)<br /><br /> First check that you can connect to the news group:<br /><br /> $ telnet newsgroups.comcast.net 119<br /> Trying 216.196.97.136...<br /> Connected to newsgroups.comcast.net.<br /> Escape character is '^]'.<br /> 200 News.GigaNews.Com<br /><br /> If you want to check for TLS or SSL see (TIP 54).<br /><br /> Here is a very simple configuration example without encryption. It<br /> appears that comcast does not support ssl or TLS.<br /><br /> In the "~/.emacs" file you would add the following to get comcast<br /> news groups<br /><br /> (setq gnus-select-method '(nntp "newsgroups.comcast.net"))<br /><br /> Then, create a "~/.authinfo" file with the following settings using<br /> your own username and password.<br /><br /> machine newsgroups.comcast.net login borkey@comcast.net password borkeypass0rd<br /><br /> Next create a "~/.newsrc" with your groups<br /><br /> news.announce.newusers:<br /> comp.lang.c++.moderated! 1-500<br /> comp.unix.programmer! 1-500<br /> comp.unix.shell! 1-500<br /> gnu.emacs.gnus! 
1-500<br /><br /> Finally, create a "~/.gnus" with your own email settings, for example<br /><br /> (setq user-mail-address "mchirico@comcast.net")<br /><br /> (defun my-message-mode-setup ()<br /> (setq fill-column 72)<br /> (turn-on-auto-fill))<br /> (add-hook 'message-mode-hook 'my-message-mode-setup)<br /><br /> To get into gnus<br /><br /> M-x gnus<br /><br /> The following are common gnus commands<br /><br /> RET view the article under the cursor<br /><br /> A A (shift-a, shift a): List all newsgroups known<br /> to the server.<br /><br /> l (lower-case L) : List only subscribed groups<br /> with unread articles.<br /><br /> L : List all newsgroups in .newsrc file.<br /><br /> g : See if new articles have arrived.<br /><br /> Some commands for reading<br /><br /> n next unread article<br /><br /> p previous article<br /><br /> SPC scroll down; moves to next unread<br /> when at the bottom of the article<br /><br /> del scroll up<br /><br /> F follow-up to group on the article you are<br /> reading now.<br /><br /> f follow-up to group without citing the article<br /><br /> R reply by mail and cite the article<br /><br /> r reply by mail without citing the article<br /><br /> m new mail<br /><br /> a new posting<br /><br /> c Catchup<br /><br /> C-u / t Show only young headers<br /> / t without C-u limits the summary<br /> to old headers<br /><br /> T T toggle threading<br /><br /> C-u g Display raw article<br /> hit g to return to normal view<br /><br /> t Show all headers; it's a toggle<br /><br /> W w Wordwrap the current article<br /><br /> W r Decode ROT13; a toggle<br /><br /> ^ fetch parent of article<br /><br /> L create a scorefile-entry based<br /> on the current article (low score)<br /> ? 
gives you information on what each character means<br /><br /> I like L, but with a high score<br /><br /> Commands to send email<br /><br /> C-c C-c send message<br /><br /> C-c C-d save message as draft<br /><br /> C-c C-k kill message<br /><br /> C-c C-m f attach file<br /><br /> M-q reformat paragraph<br /><br /><br />TIP 124:<br /><br /> Sending Email from telnet<br /><br /> Note, if you are on the computer itself you can sometimes use the local loopback.<br /> In fact, sometimes you can only use the local loopback 127.0.0.1 in<br /> place of "bozo.company.com"<br /><br /> 1 [mchirico@soup Notes]$ telnet bozo.company.com 25<br /> 2 Trying 192.168.0.204...<br /> 3 Connected to bozo.company.com.<br /> 4 Escape character is '^]'.<br /> 5 220 bozo.company.com ESMTP Postfix (Postfix-20010228-pl03) (Mandrake Linux)<br /> 6 HELO fakedomain.com<br /> 7 HELO fakedomain.com // server echo<br /> 8 250 bozo.company.com<br /> 9 MAIL FROM: test@fakedomain.com<br /> 10 MAIL FROM: test@fakedomain.com // server echo<br /> 11 250 Ok<br /> 12 RCPT TO: mchirico@someother.com<br /> 13 RCPT TO: mchirico@someother.com // server echo<br /> 14 250 Ok<br /> 15 DATA<br /> 16 DATA // echo<br /> 17 354 Enter mail, end with "." on a line by itself<br /> 18 This is a test message<br /> 19 This is a test message<br /> 20 to send<br /> 21 to send<br /> 22 .<br /> 23 250 2.0.0 j0B0uH3L018469 Message accepted for delivery<br /><br /> Above on line 6 you can type in any domain name. Line 7 is an echo. 
All<br /> echoes are listed in the comment field.<br /><br /><br /><br />TIP 125:<br /><br /> IP forwarding, IP Masquerade<br /><br /> # echo 1 > /proc/sys/net/ipv4/ip_forward<br /> # ipchains -F forward<br /> # ipchains -P forward DENY<br /> # ipchains -A forward -s 192.168.0.0/24 -j MASQ<br /> # ipchains -A forward -i eth1 -j MASQ<br /><br /><br /> This assumes that your internal network is 192.168.0.0 on eth1, and the<br /> internet is connected to eth0.<br /><br /> (Also See TIP 182)<br /><br /><br /><br />TIP 126:<br /><br /> Setting KDE as the default desktop manager<br /><br /> Edit "/etc/sysconfig/desktop" to include the two lines:<br /><br /> DESKTOP="KDE"<br /> DISPLAYMANAGER="KDE"<br /><br /><br /><br />TIP 127:<br /><br /> Have a file and you do not know what type it is (tar, gz, ASCII, binary)?<br /> Use the file command. Below it is used on the file "mftp"<br /><br /> $ file mftp<br /> mftp: Bourne-Again shell script text executable<br /><br /><br /><br />TIP 128:<br /><br /> Software RAID: Two good references<br /><br /> http://www.tldp.org/HOWTO/Software-RAID-HOWTO-1.html<br /> http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/014331.html<br /><br /> Note, you must set up grub for each RAID 1 device. Suppose you have<br /> 2 SCSI drives (sda and sdb). By default grub is set up on sda; but, you<br /> need to enable it for sdb (/dev/hdb for ide) as follows:<br /><br /> grub>device (hd0) /dev/sdb<br /> grub>root (hd0,0)<br /> grub>setup (hd0)<br /><br /> Checking if "/boot/grub/stage1" exists... no<br /> Checking if "/grub/stage1" exists... yes<br /> Checking if "/grub/stage2" exists... yes<br /> Checking if "/grub/e2fs_stage1_5" exists.. yes<br /> Running "embed /grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded.<br /> succeeded<br /> Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... 
succeeded.<br /> Done.<br /><br /> grub><br /> grub>quit<br /><br /><br /> Checking to see if everything is working:<br /><br /> $ cat /proc/mdstat<br /><br /> Checking the drives<br /><br /> $ sfdisk -d /dev/sdb<br /> $ sfdisk -d /dev/sda<br /><br /> $ fdisk -l /dev/sda "This will give general information"<br /> $ fdisk -l "General information for all drives"<br /><br /> Adding partitions back to the RAID (this assumes the failed drive was "sda"; if it is the second<br /> drive, substitute "sdb1", "sdb2", etc. below)<br /><br /> $ raidhotadd /dev/md0 /dev/sda1<br /> $ raidhotadd /dev/md1 /dev/sda2<br /> $ raidhotadd /dev/md2 /dev/sda3<br /><br /> This is an example of a cat /proc/mdstat that is working. Note that<br /> there is a listing for both sda1[0] and sdb1[1]<br /><br /> $ cat /proc/mdstat<br /><br /> Personalities : [raid1]<br /> read_ahead 1024 sectors<br /> Event: 12<br /> md0 : active raid1 sda1[0] sdb1[1]<br /> 104320 blocks [2/2] [UU]<br /><br /> md1 : active raid1 sda2[0] sdb2[1]<br /> 1044160 blocks [2/2] [UU]<br /><br /> md2 : active raid1 sda3[0] sdb3[1]<br /> 34411136 blocks [2/2] [UU]<br /><br /> unused devices: <none><br /><br /> Compare that to this where md2 is missing sda3<br /><br /> $ cat /proc/mdstat<br /><br /> Personalities : [raid1]<br /> read_ahead 1024 sectors<br /> Event: 9<br /> md0 : active raid1 sda1[0] sdb1[1]<br /> 104320 blocks [2/2] [UU]<br /><br /> md1 : active raid1 sda2[0] sdb2[1]<br /> 1044160 blocks [2/2] [UU]<br /><br /> md2 : active raid1 sdb3[1] <---- HERE<br /> 34411136 blocks [2/1] [_U]<br /><br /> unused devices: <none><br /><br /> If you are rebuilding an array, you can watch it by doing the following:<br /><br /> $ watch -n1 cat /proc/mdstat<br /><br /> Need to know the raid setup?<br /><br /> $ cat /etc/raidtab<br /><br /><br /><br /><br />TIP 129:<br /><br /> Resetting Redhat Linux Passwords using GRUB<br /><br /> 1. Press 'e'<br /> 2. Press 'e' again<br /> 3. 
Append 'single' to the kernel line<br /><br /> See<br /> http://linuxgazette.net/107/tomar.html<br /><br /><br /><br />TIP 130:<br /><br /> mtr - Matt's traceroute. This is an advanced traceroute that keeps<br /> a running display of statistics for each hop.<br /> [http://www.bitwizard.nl/mtr/]<br /> $ mtr www.yahoo.com<br /><br /> Matt's traceroute [v0.52]<br /> third-fl-71.localdomain Thu Jan 20 11:05:57 2005<br /> Keys: D - Display mode R - Restart statistics Q - Quit<br /> Packets Pings<br /> Hostname %Loss Rcv Snt Last Best Avg Worst<br /> 1. 192.168.1.1 0% 3 3 0 0 0 1<br /> 2. ???<br /> 3. fe-2-6-rr01.willogrove5.pa.pa01 0% 3 3 8 7 7 8<br /> 4. srp-8-1-ar01.willowgrove1.pa.pa 0% 2 2 8 8 8 8<br /> 5. pos7-3-cr01.torresdale.pa.core. 0% 2 2 8 8 8 8<br /> 6. 12.119.53.53 0% 2 2 12 12 12 13<br /> 7. tbr1-p012401.phlpa.ip.att.net 0% 2 2 12 12 13 13<br /> 8. tbr1-cl8.n54ny.ip.att.net 0% 2 2 13 13 13 13<br /> 9. ggr2-p310.n54ny.ip.att.net 0% 2 2 12 12 13 14<br /> 10. so-1-0-0.gar4.NewYork1.Level3.n 0% 2 2 14 14 37 61<br /> 11. ae-1-54.bbr2.NewYork1.Level3.ne 0% 2 2 13 12 13 13<br /> 12. ge-0-3-0.bbr2.Washington1.Level 0% 2 2 19 19 19 19<br /> 13. ge-1-1-51.car1.Washington1.Leve 0% 2 2 18 18 19 20<br /> 14. 4.79.228.6 0% 2 2 21 19 20 21<br /> 15. UNKNOWN-216-109-120-201.yahoo.c 0% 2 2 21 20 20 21<br /> 16. 
w2.rc.vip.dcn.yahoo.com 0% 2 2 23 21 22 23<br /><br /><br /><br />TIP 131:<br /><br /> chfn - change finger information<br /><br /> $ chfn<br /><br /> Next you are asked for a password and user information.<br /><br /><br /><br />TIP 132:<br /><br /> chsh - change login shell<br /><br /> First, you may want to get a listing of all the possible<br /> shells.<br /><br /> $ chsh -l<br /><br /> /bin/sh<br /> /bin/bash<br /> /sbin/nologin<br /> /bin/ash<br /> /bin/bsh<br /> /bin/ksh<br /> /usr/bin/ksh<br /> /usr/bin/pdksh<br /> /bin/tcsh<br /> /bin/csh<br /> /bin/zsh<br /><br /><br /><br />TIP 133:<br /><br /> bash - working with binary, hex and base 3.<br /><br /> First, the variable must be declared as an integer. Then<br /> specify the <base>#<value>. The example below is 22 in<br /> base 3.<br /><br /> $ declare -i n<br /> $ n=3#22<br /> $ echo $n<br /> 8<br /><br /> Base 16 (hex)<br /><br /> $ declare -i n2<br /> $ n2=16#a<br /> $ echo $n2<br /> 10<br /><br /> Base 8 (octal)<br /><br /> $ declare -i n3<br /> $ n3=8#11<br /> $ echo $n3<br /> 9 Note 8+1=9<br /><br /><br /><br />TIP 134:<br /><br /> Monitoring IP traffic. Try iptraf http://iptraf.seul.org/<br /><br /><br /><br />TIP 135:<br /><br /> enscript - convert text files to PostScript<br /><br /><br /><br />TIP 136:<br /><br /> dd and tar - blocking factor. How to determine the blocking factor, block size<br /> so that tar and dd can work together.<br /><br /> Step 1: Create a large file on local disk, in a directory "1" that will eventually<br /> be written to tape. This will be created with dd as follows:<br /><br /> $ mkdir 1<br /> $ cd 1<br /> $ dd if=/dev/zero of=disk-image count=40960<br /> 40960+0 records in<br /> 40960+0 records out<br /><br /> $ cd ..<br /><br /> Step 2: tar the directory and contents to tape. First rewind the tape. These examples<br /> use /dev/nst0 as the location of the tape. 
Make sure to substitute your values<br /> if needed.<br /><br /> $ mt -f /dev/nst0 rewind<br /> $ tar --label="Test 1" --create --blocking-factor=128 --file=/dev/nst0 1<br /><br /> Step 3: Read data from the tape using a block size of 128k. If you get an I/O error, which<br /> could happen if you used a different blocking factor above, then you may need<br /> to increase the bs to 256k, 512k, etc. as needed.<br /><br /> $ mt -f /dev/nst0 rewind<br /> $ dd if=/dev/nst0 bs=128k of=testblocksz count=1<br /> 0+1 records in<br /> 0+1 records out<br /><br /> $ ls -l testblocksz<br /> -rw-r--r-- 1 root root 65536 Feb 9 10:41 testblocksz<br /><br /> $ ls -lh testblocksz<br /> -rw-r--r-- 1 root root 64k Feb 9 10:41 testblocksz<br /><br /> Note above that the size 65536 is equal to 64k. The "h" switch in "ls" is for<br /> human readable.<br /><br /><br /> Step 4: tar uses a multiplier of 512*blocking-factor to get block size. Again<br /><br /> 512 * blocking-factor = block size used in dd command.<br /><br /> Putting in the values, we see that<br /><br /> 512 * 128 = 65536<br /><br /><br /> Step 5: So what does this tell you? You can now use these numbers to "dd" files<br /> to tape. But, first tar will be used to create the file locally.<br /><br /> $ tar --label="Test 1" --create --blocking-factor=128 --file=test.tar 1<br /><br /><br /> Step 6: Send this to tape with the dd command. Remember 64k is equal to 65536.<br /><br /> $ mt -f /dev/nst0 rewind<br /> $ dd if=test.tar bs=64k of=/dev/nst0<br /><br /><br /> Step 7: Now test that it can be read with the tar command using blocking-factor=128.<br /> Note the "t" option in tar only lists the archive contents. 
It will not write data.<br /><br /> $ mt -f /dev/nst0 rewind<br /> $ tar -tvf /dev/nst0 --blocking-factor=128<br /> V--------- 0/0 0 2005-02-09 10:38:20 Test 1--Volume Header--<br /> drwxr-xr-x root/root 0 2005-02-09 10:34:10 1/<br /> -rw-r--r-- root/root 20971520 2005-02-09 10:34:11 1/disk-image<br /><br /><br /> Step 8: Reading tape data with dd. Most of the time a high "ibs" (input block size)<br /> is what you want.<br /><br /> $ mt -f /dev/nst0 rewind<br /> $ dd if=/dev/nst0 of=outfromdd.tar ibs=64k<br /> 321+0 records in<br /> 41088+0 records out<br /><br /><br /> Step 9: Verify that outfromdd.tar can be read by tar with blocking-factor=128<br /><br /> $ tar -tvf outfromdd.tar --blocking-factor=128<br /> V--------- 0/0 0 2005-02-09 10:38:20 Test 1--Volume Header--<br /> drwxr-xr-x root/root 0 2005-02-09 10:34:10 1/<br /> -rw-r--r-- root/root 20971520 2005-02-09 10:34:11 1/disk-image<br /><br /><br /> PULLING FILES: The dd command can be used to pull files.<br /><br /> ssh target_address dd if=remotefile | dd of=localfile<br /><br /> Or, a specific example of getting a file from a computer called hamlet.<br /><br /> $ ssh root@hamlet dd if=/home/cvs/test | dd of=/home/storage/test<br /><br /><br /> GOING BACKWARD AND FORWARD ON TAPE:<br /><br /> Go to end of data<br /> $ mt -f /dev/nst0 eod<br /><br /> Previous record<br /> $ mt -f /dev/nst0 bsfm 1<br /><br /> Forward record<br /> $ mt -f /dev/nst0 fsf 1<br /><br /> Rewind<br /> $ mt -f /dev/nst0 rewind<br /><br /> Tell<br /> $ mt -f /dev/nst0 tell<br /><br /> (Reference TIP 151 - for how to get around firewalls)<br /><br /> Below is a script that I use to back up computers via ssh. 
The<br /> tape drive is on "nis" and the extra space is on "hamlet".<br /><br /> #!/bin/bash<br /> # Program to back up a server remotely<br /> # Assume remote server is nis, you are on squeezel<br /> #<br /> # Recover from tape<br /> #<br /> # dd if=/dev/nst0 of=test.tar.gz bs=64k<br /> #<br /> filename="support1.$(date "+%m%d%y%H%M").tar.gz"<br /> DIRTOBACKUP=/var/www<br /> #tar cvzf - $DIRTOBACKUP | ssh root@nis '(mt -f /dev/nst0 rewind; dd of=/dev/nst0 bs=64k )'<br /> tar cvzf - $DIRTOBACKUP | ssh support1@hamlet "dd of=/home/support1/backups/${filename}"<br /><br /> Another example program, below, pushes the last ".tar.gz" file to tape:<br /><br /> #!/bin/bash<br /> # Program to push files to tape<br /> #<br /> #<br /> # Notes on recovering from tape<br /> #<br /> # dd if=/dev/nst0 of=test.tar.gz ibs=64k<br /> # or<br /> # $ ssh root@tapeserver "mt -f /dev/nst0 rewind"<br /> # $ ssh root@tapeserver "dd if=/dev/nst0 ibs=64k"|dd of=cvs1.tar.gz<br /> #<br /> #<br /> #<br /> # First rewind tape<br /> ssh root@tapeserver 'mt -f /dev/nst0 rewind'<br /> #<br /> # Grab only the last file<br /> file=$(find /home/cvs -iname 'cvs*.tar.gz'|sort|tail -n 1)<br /> dd if=${file}|ssh root@tapeserver 'dd of=/dev/nst0 bs=64k'<br /><br /><br /><br />TIP 137:<br /><br /> Apache - redirecting pages. 
All changes are in httpd.conf<br /><br /> RedirectMatch (.*)\.gif$ http://www.anotherserver.com$1.jpg<br /><br /> Redirect /service http://foo2.bar.com/service<br /><br /><br /> If more than one DNS record points to the server, then it's<br /> possible to redirect based upon which DNS entry was used in<br /> the web query.<br /><br /> For example, a single web server has the following<br /> DNS entries mapped to its single IP address.<br /><br /> dev.mchirico.org<br /> notes.mchirico.org<br /><br /> It's possible to redirect or rewrite the page delivered to<br /> the client with the following changes in httpd.conf<br /><br /><br /> RewriteCond %{HTTP_HOST} ^dev.mchirico.org$<br /> RewriteRule ^/$ http://mchirico.org/dev [L]<br /><br /> RewriteCond %{HTTP_HOST} ^notes.mchirico.org$<br /> RewriteRule ^/$ http://mchirico.org/notes [L]<br /><br /><br /><br /><br />TIP 138:<br /><br /> samba mounts via ssh - mounting a samba share through an ssh tunnel, going<br /> through an intermediate computer that accepts ssh. We'll call this<br /> intermediate computer middle [65.219.4.23], and we want to get to<br /> destination [192.168.0.81]. 
The user will be mchirico.<br /><br /> STEP 1:<br /><br /> $ mkdir -p /samba/share<br /><br /> STEP 2:<br /><br /> This has to be done as root, since we are using a lower port.<br /><br /> $ ssh -N -L 139:192.168.0.81:139 mchirico@65.219.4.23<br /><br /> STEP 3:<br /><br /> umount /samba/share<br /> /bin/mount -t smbfs -o username=donkey,workgroup=donkeydomain,<br /> password=passw0rk1,port=139,dmask=770,fmask=660,<br /> netbiosname=homecpu //localhost/share /samba/share<br /><br /><br /><br /><br />TIP 139:<br /><br /> Music on Fedora Core -- How to play music on http://magnatune.com with "xmms".<br /><br /> The following command will show the sound driver:<br /><br /> $ lspci|grep -i audio<br /><br /><br /> STEP 1:<br /><br /> Unmute amixer with the following command:<br /><br /> $ amixer set Master 100% unmute<br /> $ amixer set PCM 100% unmute<br /><br /> Note you can also get a graphical interface with "alsamixer"<br /><br /> $ alsamixer<br /><br /> h,F1 -- for help<br /> Esc -- exit<br /> Tab -- move to selections<br /><br /><br /> STEP 2:<br /><br /> Test a sound file "*.au" with aplay. To quickly find files on your system use<br /> the "locate *.au" command.<br /><br /> $ aplay /usr/lib/python2.3/test/audiotest.au<br /><br /> STEP 3:<br /><br /> Install "xmms-mp3-1.2.10-9.2.1.fc3.rf.i386.rpm" which does not come with Fedora because<br /> of licensing restrictions. The latest version of this package can be found<br /> at the following url:<br /><br /> http://rpmseek.com/rpm-pl/xmms-mp3.html<br /><br /><br /> $ rpm -ivh xmms-mp3-1.2.10-9.2.1.fc3.rf.i386.rpm<br /><br /> STEP 4:<br /><br /> Go to magnatune "http://magnatune.com/", select a genre and make sure xmms<br /> is the default player.<br /><br /><br /><br />TIP 140:<br /><br /> Routing -- getting access to a network 1 hop away. 
You are currently on the 192 network<br /> and you want access to the 172.21.0.0 network that has a computer straddling<br /> the two, with /proc/sys/net/ipv4/ip_forward set to 1.<br /><br /><br /> $ route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.0.204<br /><br /> To undo:<br /><br /> $ route del -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.0.204<br /><br /> Now you can ping 172.21.0.21.<br /><br /> Does not work?<br /><br /> Log on to 192.168.0.204 and execute the following commands:<br /><br /> $ echo 1 > /proc/sys/net/ipv4/ip_forward<br /> $ cat /proc/sys/net/ipv4/ip_forward<br /> 1<br /><br /> To look at the gateway, execute the following command.<br /><br /> $ netstat -r<br /><br /> References:<br /><br /> http://lartc.org/lartc.html<br /><br /><br /><br />TIP 141:<br /><br /> RAM disk -- creating a filesystem in RAM.<br /><br /> $ mkfs -t ext3 -q /dev/ram1 4096<br /> $ mkdir -p /fsram<br /> $ mount /dev/ram1 /fsram -o defaults,rw<br /><br /><br /><br />TIP 142:<br /><br /> Create a Live Linux CDROM using BusyBox and OpenSSH.<br /><br /> These steps are rather long. A complete tutorial is given at<br /> the following link:<br /> http://prdownloads.sourceforge.net/souptonuts/instructions_boot_system.txt<br /><br /><br /><br />TIP 143:<br /><br /> SystemImager (http://www.systemimager.org/) is software that automates Linux installs,<br /> software distribution, and production deployment.<br /><br /><br /><br /><br />TIP 144:<br /><br /> Mounted a filesystem in rescue mode, yet you cannot read and write? Remount it read-write.<br /><br /> $ mount -o remount,rw /<br /><br /><br /><br />TIP 145:<br /><br /> Nmap commands to check for Microsoft VPN connection.<br /><br /> $ nmap -sO -p 47 vpn1.someserver.com<br /> $ nmap -sS -p T:1723 vpn1.someserver.com<br /><br /> By the way, with nmap you can specify multiple ports. 
Below<br /> is an example of multiple ports; but, use the commands above<br /> for Microsoft VPN services.<br /><br /> $ nmap -sS -p T:1723-3000<br /><br /><br /><br />TIP 146:<br /><br /> Perl and ssh - monitoring systems. The output from ssh can be parsed. Below is<br /> a simple procedure just to read the ssh output into Perl.<br /><br /> #!/usr/bin/perl<br /> #<br /> $pid = open $readme, "ssh root\@hamlet df -lh|" or die "Could not ssh\n";<br /> while(<$readme>) {<br /> print $_;<br /> }<br /> close $readme;<br /><br /> But note, you probably want to do something more complex. Below is a more robust<br /> example that bypasses all the fortune and header junk that you may encounter when<br /> logging in.<br /><br /> #!/usr/bin/perl<br /> #<br /> $pid = open $readme, "ssh root\@hamlet df -lh 2>/dev/null|" or die "Could not ssh\n";<br /> while(<$readme>) {<br /> print $_;<br /> }<br /> close $readme;<br /><br /> NO! You CANNOT do bidirectional communication with the open statement. Note the "|" before<br /> and after below, which cannot be done.<br /><br /> # Cannot do this!<br /> $pid = open $readme, "|ssh root\@hamlet df -lh 2>/dev/null|" or die "Could not ssh\n";<br /><br /> Below is a simple Perl example working with arrays:<br /><br /> #!/usr/bin/perl<br /> @ArrayOfArray = (<br /> [ "ant", "bee" ],<br /> [ "mouse", "mole", "rat" ],<br /> [ "duck", "goose", "flamingo" ],<br /> [ "rose","carnation","sunflower"],<br /> );<br /><br /> for $i ( 0 .. $#ArrayOfArray ) {<br /> for $j ( 0 .. 
$#{$ArrayOfArray[$i]} ) {<br /> print "Element $i $j is $ArrayOfArray[$i][$j]\n";<br /> }<br /> }<br /><br /> # Or this is another way to list elements<br /> foreach( @ArrayOfArray ) {<br /> foreach $i (0..$#$_) {<br /> print "$_->[$i] "<br /> }<br /> print "\n";<br /> }<br /><br /><br /> Below is an example of working with a Hash of Arrays:<br /><br /> #!/usr/bin/perl<br /> # ./program < /etc/passwd<br /> while(<>){<br /> next unless s/^(.*?):\s*//;<br /> $HoA{$1} = [ split(/:/) ];<br /> }<br /> for $i (keys %HoA ) {<br /> print "$i: @{ $HoA{$i} } \n";<br /> }<br /><br /> Example of a regular expression. This is my most used regular expression - I like<br /> this sample. See the "www.unix.org.ua" link at the end of this tip.<br /><br /> "hot cross buns" =~ /cross/;<br /> print "Matched: <$`> $& <$'>\n"; # Matched: <hot> cross <><br /> print "Left: <$`>\n"; # Left: <hot><br /> print "Match: <$&>\n"; # Match: <cross><br /> print "Right: <$'>\n"; # Right: <><br /><br /><br /> If you're looking for Perl information, type "man perl", which will show you how<br /> to get even more information. Or better yet, take a look at the following<br /> link:<br /><br /> http://www.unix.org.ua/orelly/perl/prog3/ch09_01.htm<br /> also<br /> http://www.stonehenge.com/merlyn/UnixReview/<br /><br /> For a quick example on using Perl with SQLite, see the following links:<br /><br /> http://prdownloads.sourceforge.net/souptonuts/README_sqlite_tutorial.html?download<br /> or<br /> http://freshmeat.net/articles/view/1428/<br /> or<br /> http://www.perl.com/pub/a/1999/09/refererents.html<br /><br /> Standard input for files. This example will read from stdin, or open a file if given as<br /> an argument, and convert all "&" to "&amp;", "<" to "&lt;" and ">" to "&gt;", which can be handy when<br /> converting text files to html files. 
Note the "while(<>)" will take multiple file names<br /> on the command line.<br /><br /> #!/usr/bin/perl<br /> while(<>) {<br /> s/&/&amp;/g;<br /> s/</&lt;/g;<br /> s/>/&gt;/g;<br /> print;<br /> }<br /><br /> The Perl Debugger is very useful for testing commands and works like an interpreter, just<br /> like Python. To get into the Perl Debugger execute the command below; "q" to quit.<br /><br /> $ perl -de 0<br /><br /> Reference TIP 170<br /><br /><br /><br />TIP 147:<br /><br /> Shutdown<br /><br /> # shutdown 8:00 -- Shutdown at 8:00<br /><br /> # shutdown +13 -- Shutdown after 13 min<br /><br /> # shutdown -r now -- Shutdown now and restart<br /><br /> # shutdown -k +2 -- "The system is going DOWN to maintenance mode in 2 minutes!"<br /> The above is only a warning.<br /><br /> # shutdown -h now -- Shutdown now and halt<br /><br /> # shutdown -c -- Cancel shutdown<br /><br /><br /><br />TIP 148:<br /><br /> ac - print statistics about users' connect time<br /><br /> $ ac -p -- print hours of connect time per user (individual)<br /> $ ac -dy -- print daily usage<br /><br /> Options can also be combined<br /><br /> $ ac -dyp<br /><br /><br /><br />TIP 149:<br /><br /> Smart Monitoring Tools:<br /> Disk failing? 
Or want to know the temperature of your hard-drive?<br /><br /> http://smartmontools.sourceforge.net/<br /><br /> For a good, quick tutorial, see the Linux Journal article<br /> http://www.linuxjournal.com/article/6983<br /><br /> Below are some common commands:<br /><br /> $ smartctl -i /dev/hda<br /><br /> $ smartctl -Hc /dev/hda<br /><br /> $ smartctl -A /dev/hda<br /><br /><br /><br />TIP 150:<br /><br /> Monitor DHCP traffic - dhcpdump and tcpdump.<br /><br /> Download, build and install dhcpdump<br /><br /> $ wget http://voxel.dl.sourceforge.net/sourceforge/mavetju/dhcpdump-1.5.tar.gz<br /> $ tar -xzf dhcpdump-1.5.tar.gz<br /> $ cd dhcpdump-1.5<br /> $ ./configure<br /> $ make && make install<br /><br /> Once it's installed, you can monitor all DHCP traffic as follows (run as root).<br /><br /> $ tcpdump -lenx -i eth0 -s 1500 port bootps or port bootpc| dhcpdump<br /><br /> The above assumes you are using eth0 (ethernet port 0).<br /><br /><br /><br />TIP 151:<br /><br /> Breaking Firewalls with ssh<br /><br /><br /> A sample .ssh/config file (note this must have chmod 600 rights)<br /><br /> ## Server1 ##<br /> Host 130.21.19.227<br /> LocalForward 20000 192.168.0.66:80<br /> LocalForward 22000 192.168.0.66:22<br /><br /> With the above "~/.ssh/config" file, after sshing into 130.21.19.227 it<br /> is then possible to ssh into nearby computers directly.<br /><br /> $ ssh -l mchirico 130.21.19.227<br /> $ scp -P 22000 authorized_keys* mchirico@localhost:.<br /> $ ssh -l mchirico localhost -p 22000<br /><br /> For the complete article reference the following link:<br /> http://souptonuts.sourceforge.net/sshtips.htm<br /><br /><br /><br />TIP 152:<br /><br /> Renaming files - suppose you want to rename all the ".htm" files to ".html"<br /><br /> $ rename .htm .html *.htm<br /><br /> Or, suppose you have files file1, file2, file3 ...<br /><br /> $ touch file1 file2 file3 file4 file5 file6<br /> $ rename file file. file*<br /><br /> The above command will give you "file.1", "file.2" ... 
"file.6"<br /><br /><br /><br />TIP 153:<br /><br /> Renaming files with Perl - this is taken from "Programming Perl 3rd Edition"<br /><br /> #!/usr/bin/perl<br /> # rename - change filenames<br /> $op = shift;<br /> for (@ARGV) {<br /> $was = $_;<br /> eval $op;<br /> die if $@;<br /> # next line calls built-in function, not the script<br /> rename($was,$_) unless $was eq $_;<br /> }<br /><br /> The above Perl program can be used as follows:<br /><br /> $ rename 's/\.orig$//' *.orig<br /> $ rename 'y/A-Z/a-z/ unless /^Make/' *<br /><br /> Also reference:<br /> http://www.unix.org.ua/orelly/perl/prog3/<br /><br /><br /><br />TIP 154:<br /><br /> R project (http://www.r-project.org)<br /><br /> To start R, just type "R" at the command prompt and "q()" to quit. Below<br /> 2 is raised to powers 0 through 6 and thrown into an array.<br /><br /> $ R<br /> > N <- 2^(0:6)<br /> > N<br /> [1] 1 2 4 8 16 32 64<br /> ><br /><br /> There is a summary() command.<br /><br /> > summary(N)<br /> Min. 1st Qu. Median Mean 3rd Qu. 
Max.<br /> 1.00 3.00 8.00 18.14 24.00 64.00<br /><br /> Note that array indexing begins at 1 and not 0<br /><br /> > N[1:3]<br /> [1] 1 2 4<br /><br /><br /><br />TIP 155:<br /><br /> ls - listing files by size, with the biggest file listed last<br /><br /><br /> $ ls --sort=size -lhr<br /><br /> The above command sorts files by size, listing the contents in<br /> "-h" human-readable format in reverse order.<br /><br /> Note the options: --sort={none,time,size,extension}<br /><br /><br /><br />TIP 156:<br /><br /> Perl - program to clean up old versions of files<br /><br /> #!/usr/bin/perl<br /> # Copyright (c) GPL 2005 Mike Chirico<br /> # This program deletes old files from several directories<br /> # and within each directory there must be x number of copies<br /> # each y number of bytes<br /> #<br /><br /> sub delete_old_ones {<br /> $directory_and_file=$_[0];<br /> $save_count=$_[1];<br /> $bytes_in_file=$_[2];<br /> # Don't change setting here of '-lt'<br /> $pid = open $readme, "ls -lt $directory_and_file|" or die "Could not execute\n";<br /> while(<$readme>) {<br /> my @fields = split;<br /> # Make sure we have $save_count good ones with data<br /> if ($fields[4] > $bytes_in_file && $save_count > 0) {<br /> $save_count--;<br /> print "Kept files: $fields[4] $fields[8]\n";<br /> }<br /> # delete the old ones<br /> if ($save_count <= 0 )<br /> {<br /> print "Deleted files: $fields[4] $fields[8]\n";<br /> unlink $fields[8];<br /> }<br /> }<br /> close $readme;<br /> }<br /><br /><br /> @AofA = (<br /> [ "/home/cvs/backups/*.gz", "6",196621 ],<br /> [ "/home/mail/backups/*.gz","5",34 ],<br /> [ "/home/snort/backups/*.gz","2",34 ],<br /> [ "/home/server1/backups/*.gz","2",34 ],<br /> [ "/home/actserver/backups/*.gz","2",34 ],<br /> [ "/home/server2/backups/*.gz","2",34 ],<br /> );<br /><br /><br /> foreach( @AofA ) {<br /> &delete_old_ones($_->[0],$_->[1],$_->[2]);<br /> }<br /><br /> Reference TIP 170 and the following link:<br /> 
http://www.unix.org.ua/orelly/perl/prog3/<br /><br /><br /><br />TIP 157:<br /><br /> Graphics and Visualization Software that runs on Linux<br /> http://www.tldp.org/HOWTO/Scientific-Computing-with-GNU-Linux/graphvis.html<br /><br /><br /><br />TIP 158:<br /><br /> Keeping files in sync going both ways. Unlike rsync, this is not a one-way mirror<br /> option.<br /><br /> You will need OCaml installed first.<br /><br /> $ wget http://caml.inria.fr/pub/distrib/ocaml-3.08/ocaml-3.08.3.tar.gz<br /> $ tar -xzf ocaml-3.08.3.tar.gz<br /> $ cd ocaml-3.08.3<br /><br /> $ ./configure<br /> $ make world<br /> $ make opt<br /> $ make install<br /><br /> Next, get Unison and put it in a different directory.<br /> [http://www.cis.upenn.edu/~bcpierce/unison/]<br /><br /> $ wget http://www.cis.upenn.edu/~bcpierce/unison/download/stable/latest/unison-2.10.2.tar.gz<br /> $ tar -xzf unison-2.10.2.tar.gz<br /> $ cd unison-2.10.2<br /> $ make UISTYLE=text<br /> $ su<br /> # cp unison /usr/local/bin/.<br /><br /> Note, you have to copy the file manually.<br /><br /> See the following article [http://www.linuxjournal.com/article/7712]<br /><br /><br /><br />TIP 159:<br /><br /> Dump ext2/ext3 filesystem information with "dumpe2fs". Use the mount command<br /> to find the device name, then query away.<br /><br /> $ dumpe2fs /dev/sda1<br /><br /><br /><br />TIP 160:<br /><br /> sysreport - a script that generates an HTML report on the system configuration. It<br /> gathers information about the hardware and is somewhat Red Hat specific. The utility<br /> should be run as root.<br /><br /> $ /usr/sbin/sysreport<br /><br /><br /><br />TIP 161:<br /><br /> Key Bindings Using bind. 
You can bind, say, ctl-t to a command.<br /><br /> Add the following to your "~/.inputrc" file, just as it is typed below with quotes.<br /><br /> "\C-t": ls -l<br /><br /> Next, run the command<br /><br /> $ bind -f .inputrc<br /><br /> Or, you can do everything on the command line; however, it won't be there the next time<br /> you log in. Below is the way to do everything on the command line.<br /><br /> $ bind -x '"\C-t":ls -l'<br /><br /> To unbind use the "-r" option. Single quotes are not needed.<br /><br /> $ bind -r "\C-t"<br /><br /> Getting a list of all bindings can be done as follows, and note this can be redirected<br /> to the ".inputrc" file for further editing.<br /><br /> $ bind -p > .inputrc<br /><br /><br /><br />TIP 162:<br /><br /> awk - common awk commands.<br /><br /> Find device names "sd" or with major number 4 and device name "tty". Print the<br /> record number NR, plus the major number and minor number.<br /><br /> $ awk '$2 == "sd"||$1 == 4 && $2 == "tty" { print NR,$1,$2}' /proc/devices<br /><br /> Find device name equal to "sound".<br /><br /> $ awk '/sound/{print NR,$1,$2}' /proc/devices<br /><br /> Print the 5th record, first field, in file test<br /><br /> $ awk 'NR==5{print $1}' test<br /><br /> Print a record, skip 4 records, print a record etc from file1<br /><br /> $ awk '(NR-1) % 4 == 0 {print $1}' file1<br /><br /> Print all records except the last one from file1<br /><br /> $ tac file1|awk 'NR > 1 {print $0}'|tac<br /><br /> Print A,B,C ..Z on each line, cycling back to A if greater than 26 lines<br /><br /> $ awk '{ print substr("ABCDEFGHIJKLMNOPQRSTUVWXYZ",(NR-1)%26+1,1),$0}' file1<br /><br /> Number of bytes in a directory.<br /><br /> $ ls -l|awk 'BEGIN{ c=0}{ c+=$5} END{ print c}'<br /><br /> Remove duplicate, nonconsecutive lines. As an advantage over "sort|uniq"<br /> you can eliminate duplicate lines in an unsorted file.<br /><br /> $ awk '! 
a[$0]++' file1<br /><br /> Or the more efficient script<br /><br /> $ awk '!($0 in a) {a[$0];print}' file1<br /><br /> Print only the lines in file1 that have 80 characters or more<br /><br /> $ awk 'length >= 80' file1<br /><br /> Print line number 25 on an extremely large file -- note it has<br /> to be efficient and exit after printing line number 25.<br /><br /> $ awk 'NR==25 {print; exit}' verybigfile<br /> <br /><br /><br /><br />TIP 163:<br /><br /> Configuring Remote Logging. If you have several servers on 192.168.1.0, you can set up remote logging<br /> as follows.<br /><br /> MAIN LOG SERVER (192.168.1.81):<br /><br /> Firewall - allow UDP port 514 on the main server that will receive the logs.<br /><br /> $ iptables -A INPUT -p udp -s 192.168.1.0/24 --dport 514 -j ACCEPT<br /><br /> Edit "/etc/sysconfig/syslog" and add the "-r" option to SYSLOGD_OPTIONS as shown below.<br /><br /> SYSLOGD_OPTIONS="-r -m 0"<br /><br /> Note, the "-r" is to allow remote logging and "-m 0" specifies that the syslog process should<br /> not write periodic timestamp marks. I prefer to only write timestamps for the clients.<br /><br /> Next, restart the logging process<br /><br /> $ service syslog restart<br /><br /> CLIENT LOG SERVER:<br /><br /> Edit "/etc/syslog.conf" and add the ip address of the log server, or put in the hostname.<br /><br /> *.* @192.168.1.81<br /><br /> Next, restart the logging process<br /><br /> $ service syslog restart<br /><br /><br /><br />TIP 164:<br /><br /> kudzu - hardware on your system. To probe the hardware on your system without doing<br /> anything, issue the following command.<br /><br /> $ kudzu -p<br /><br /> But wait, a lot of this information is already recorded in the following file<br /><br /> /etc/sysconfig/hwconf<br /><br /> You can also use lspci to list all PCI devices.<br /><br /> $ lspci<br /><br /> Also, take a look at the script /usr/sbin/sysreport, since this script has a lot of<br /> info gathering commands. 
You can pick and choose what you want, or run the complete<br /> report.<br /><br /> If you just want information on the NIC<br /><br /> $ ip link show eth0<br /> 2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000<br /> link/ether 00:11:11:8a:be:3f brd ff:ff:ff:ff:ff:ff<br /><br /><br /><br /><br /><br />TIP 165:<br /><br /> cfengine - a very powerful agent for monitoring and administering a single computer<br /> or multiple computers. [ http://www.cfengine.org/ ]<br /><br /> The following is a quick example on downloading and installing cfengine.<br /><br /> $ ncftpget ftp://ftp.iu.hio.no/pub/cfengine/cfengine-2.1.15.tar.gz<br /> $ md5sum cfengine-2.1.15.tar.gz<br /> f03de82709f84c3d6d916b6e557321f9 cfengine-2.1.15.tar.gz<br /><br /> $ tar -xzf cfengine-2.1.15.tar.gz<br /><br /><br /> You need to have a current version of BerkeleyDB (http://downloads.sleepycat.com/db-4.3.28.tar.gz).<br /> Note that BerkeleyDB has a funny install: you cd to the "build_unix" directory and run the<br /> configure script from "../dist", as shown below.<br /><br /> Installing BerkeleyDB if needed:<br /> $ wget http://downloads.sleepycat.com/db-4.3.28.tar.gz<br /> $ tar -xzf db-4.3.28.tar.gz<br /> $ cd db-4.3.28/build_unix/<br /> $ ../dist/configure<br /> $ make<br /> $ make install<br /><br /> You also need a current version of OpenSSL. For instructions on how to install OpenSSL see<br /> (http://souptonuts.sourceforge.net/postfix_tutorial.html).<br /><br /> See (TIP 49) on putting "/usr/local/BerkeleyDB.4.3/lib" in the "/etc/ld.so.conf" file. Or<br /> once BerkeleyDB is installed, you can put the location on the command line as follows:<br /><br /> Configuring cfengine with direct reference to BerkeleyDB.4.3. 
First cd to the cfengine source.<br /><br /> $ ./configure --with-berkeleydb=/usr/local/BerkeleyDB.4.3/lib<br /> $ make<br /> $ make install<br /><br /> Next create the following directories:<br /><br /> $ mkdir -p /var/cfengine/bin<br /> $ mkdir -p /var/cfengine/inputs<br /><br /> Copy needed files (cfagent, cfdoc, cfenvd, cfenvgraph, cfexecd, cfkey, cfrun, cfservd, cfshow):<br /><br /> $ cp /usr/local/sbin/cf* /var/cfengine/bin<br /><br /><br /> You'll also need to generate keys. As root, execute the following:<br /><br /> $ cfkey<br /><br /> The command above will write the public and private keys in<br /> "/var/cfengine/ppkeys".<br /><br /><br /> You probably want (cfexecd, cfservd, and cfenvd) running on all servers. If you<br /> add the following to "/etc/rc.local" these daemons will start on reboot.<br /><br /> # Lines in /etc/rc.local<br /> /usr/local/sbin/cfexecd<br /> /usr/local/sbin/cfservd<br /> /usr/local/sbin/cfenvd<br /><br /> Also, make sure you run each command now as follows:<br /><br /> $ /usr/local/sbin/cfexecd<br /> $ /usr/local/sbin/cfservd<br /> $ /usr/local/sbin/cfenvd<br /><br /> Firewall settings must be adjusted to allow port 5308 for tcp/udp. My local network<br /> is 192.168.1.0, so I'm opening it up for all my computers.<br /><br /> $ iptables -A INPUT -p udp -s 192.168.1.0/24 --dport 5308 -j ACCEPT<br /> $ iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 5308 -j ACCEPT<br /><br /> A set of keys needs to be on the server and hosts. 
For example, my key on "tape.squeezel.com"<br /> should be copied over to the server "squeezel.squeezel.com" as follows:<br /><br /> This is done from tape.squeezel.com<br /><br /> $ scp /var/cfengine/ppkeys/localhost.pub root@squeezel.squeezel.com:/var/cfengine/ppkeys/root-tape.squeezel.com.pub<br /> $ scp root@squeezel.squeezel.com:/var/cfengine/ppkeys/localhost.pub /var/cfengine/ppkeys/root-squeezel.squeezel.com.pub<br /><br /> Also, "/var/cfengine/inputs/cfrun.hosts" on the server "squeezel.squeezel.com" must contain<br /> all the computers that will get updated. This is "cfrun.hosts" on "squeezel.squeezel.com"<br /><br /> closet.squeezel.com<br /> tape.squeezel.com<br /><br /> Once I'm done, from "tape.squeezel.com" I can run the following test:<br /><br /> $ cfrun squeezel.squeezel.com -v<br /><br /><br /><br /><br />TIP 166:<br /><br /> cfengine - a quick example. This example will be run as root. You create the file "cfagent.conf" in<br /> "/var/cfengine/inputs/". The example below will checksum all the files in /home/chirico/deleteme/tripwire;<br /> it will also comment out any line containing "finger" in the file /tmp/testdir/stuff, and append<br /> the line "# Edit Change with cfengine" to that file.<br /><br /> # /var/cfengine/inputs/cfagent.conf<br /> #<br /> # You run this with the following:<br /> # cfagent -vK<br /> <br /> control:<br /> actionsequence = ( files tidy editfiles )<br /> ChecksumDatabase = ( /var/cfengine/cache.db )<br /> # Below, true to update md5<br /> ChecksumUpdates = ( true )<br /> <br /> <br /> files:<br /> /home/chirico/deleteme/tripwire checksum=md5 recurse=inf<br /> /home/chirico/deleteme/tripwire/moredata checksum=md5 recurse=inf<br /> #/home/chirico/deleteme/tripwire/compress recurse=inf include=*.txt action=compress<br /> # If the database isn't secure, nothing is secure...<br /> /var/cfengine/cache.db mode=600 owner=root action=fixall<br /> <br /> tidy:<br /> /home/chirico/deleteme/tripwire pattern=*~ recurse=inf 
age=0<br /> # You must put an age. 0 runs now.<br /><br /> <br /> editfiles:<br /> <br /> { /tmp/testdir/stuff<br /><br /> HashCommentLinesContaining "finger"<br /> AppendIfNoSuchLine "# Edit Change with cfengine "<br /> }<br /><br /><br /> A few further notes on the above. The line "actionsequence = ( files tidy editfiles )" tells the order<br /> of what to execute. The heading "tidy:" deletes files, and of course, "editfiles" does the editing of files.<br /><br /> To run the example, execute the following command. The "-K" causes the lock file to be ignored.<br /><br /> $ cfagent -vK<br /><br /><br /><br /><br />TIP 167:<br /><br /> Implementing Disk Quotas - a quick example that can easily be done on a live system for testing. There<br /> is no need to reboot, since you'll be creating a virtual filesystem.<br /><br /> Do the following as root. First create a mount point.<br /><br /> # mkdir -p /quota<br /><br /> Next, create a 20 MB file. Since I have many of these files, I created a special directory "/usr/disk-img"<br /><br /> # mkdir -p /usr/disk-img<br /> # dd if=/dev/zero of=/usr/disk-img/disk-quota.ext3 count=40960<br /><br /> The dd command above creates a 20 MB file because, by default, dd uses a block size of 512 bytes. 
That makes<br /> the size: 40960*512=20971520.<br /><br /> Next, format this as an ext3 filesystem<br /><br /> # /sbin/mkfs -t ext3 -q /usr/disk-img/disk-quota.ext3 -F<br /><br /> Add the following line to "/etc/fstab"<br /><br /> /usr/disk-img/disk-quota.ext3 /quota ext3 rw,loop,usrquota,grpquota 0 0<br /><br /> Now, mount this filesystem<br /><br /> # mount /quota<br /><br /> Take a look at it:<br /><br /> # ls -l /quota<br /> lost+found<br /><br /> Now, run "quotacheck"<br /><br /> # quotacheck -vug /quota<br /><br /> You'll get errors the first time this is run, because you have no quota files.<br /> But, run it a second time and you'll see something similar to the following:<br /><br /> # quotacheck -vug /quota<br /> quotacheck: Scanning /dev/loop2 [/quota] done<br /> quotacheck: Checked 3 directories and 4 files<br /><br /> Now take a look at the files:<br /><br /> # ls -l /quota<br /> total 26<br /> -rw------- 1 root root 6144 Jun 14 12:23 aquota.group<br /> -rw------- 1 root root 6144 Jun 14 12:23 aquota.user<br /> drwx------ 2 root root 12288 Jun 14 12:18 lost+found<br /><br /> Next use "edquota" to grant the user "chirico" a certain quota<br /><br /> # edquota -f /quota chirico<br /><br /> This will bring up an editor, and here I have edited it so that user "chirico"<br /> has a block soft limit of 120 (120*512 bytes = about 60K) and a hard limit of 150, plus an inode soft limit of 2 and a hard limit of 3.<br /><br /> Disk quotas for user chirico (uid 500):<br /> Filesystem blocks soft hard inodes soft hard<br /> /dev/loop2 2 120 150 1 2 3<br /><br /> Next, turn quotas on with the following command:<br /><br /> $ quotaon /quota<br /><br /> If you need to turn off quotas, the command is "quotaoff -a" for all filesystems. You'll run into<br /> errors if you try to run quotacheck, say "quotacheck -avug", because this tries to unmount and mount<br /> the filesystem. You need to turn off quotas first with "quotaoff /quota". 
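Since the quotaoff/quotacheck/quotaon order matters, it can help to wrap the sequence in a small shell function. This is only a sketch: it assumes root privileges, the quota tools installed, and whatever quota-enabled mount point you pass in (e.g. /quota).

```shell
#!/bin/sh
# requota: rebuild the quota files on one quota-enabled filesystem (run as root).
requota() {
  fs=${1:?usage: requota <mountpoint>}
  quotaoff "$fs" &&         # quotas must be off before quotacheck will work
  quotacheck -vug "$fs" &&  # recompute aquota.user / aquota.group
  quotaon "$fs"             # re-enable enforcement
}
# Example (as root):  requota /quota
```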
Note you only need to run<br /> quotacheck once, or when doing maintenance after a system crash.<br /><br /><br /> To get a report on the quota, run "repquota" as follows:<br /><br /> $ repquota /quota<br /> *** Report for user quotas on device /dev/loop0<br /> Block grace time: 7days; Inode grace time: 7days<br /> Block limits File limits<br /> User used soft hard grace used soft hard grace<br /> ----------------------------------------------------------------------<br /> root -- 1189 0 0 2 0 0<br /> chirico -+ 93 0 0 4 2 5 6days<br /><br /><br /> Note above that user "chirico" has used 4 on the file limits. This user has a hard<br /> limit of 5. So when this user tries to create 2 more files (bringing him over the limit of 5),<br /> he will get the following error as demonstrated below.<br /><br /><br /> [chirico@squeezel chirico]$ touch one<br /> [chirico@squeezel chirico]$ touch two<br /> loop0: write failed, user file limit reached.<br /> touch: cannot touch `two': Disk quota exceeded<br /><br /><br /> Now, if repquota (run by root) is executed it shows the following:<br /><br /> $ repquota /quota<br /> *** Report for user quotas on device /dev/loop0<br /> Block grace time: 7days; Inode grace time: 7days<br /> Block limits File limits<br /> User used soft hard grace used soft hard grace<br /> ----------------------------------------------------------------------<br /> root -- 1189 0 0 2 0 0<br /> chirico -+ 94 0 0 5 2 5 6days<br /><br /><br /> Note the "+" sign above. User "chirico" is above the file soft limit and, in this case,<br /> has hit the hard limit.<br /><br /> To warn users by sending them email, run "warnquota", but you need to check that<br /> "/etc/quotatab" is set up correctly. 
For the example above, this file should<br /> look as follows:<br /><br /> $ cat /etc/quotatab<br /> #<br /> # This is sample quotatab (/etc/quotatab)<br /> # Here you can specify description of each device for user<br /> #<br /> # Comments begin with hash in the beginning of the line<br /><br /> # Example of description<br /> /dev/loop0: This is loopback device<br /><br /> Just run the following as root:<br /><br /> $ warnquota<br /><br /> By the way, if you want to change the grace period, it can only be done on a filesystem<br /> basis. Not per user.<br /><br /> $ edquota -t<br /><br /> Users can run "quota" to see their usage as follows:<br /><br /> [chirico@squeezel ~]$ quota<br /> Disk quotas for user chirico (uid 500):<br /> Filesystem blocks quota limit grace files quota limit grace<br /> /dev/loop0 94 0 0 5 10 50<br /><br /> As you can see from above, I changed my inode limit to 50.<br /><br /> What about running this on the whole filesystem? Yes, below is an example where I'm running<br /> this on FC3, on the root of the filesystem "/". This assumes that you have installed the<br /> quota package. Try doing "rpm -q quota" to see if this package is installed.<br /><br /> Step 1:<br /><br /> Check to make sure the quota software is installed. 
You can either do a "whereis quota",<br /> or check for the rpm package.<br /><br /> $ whereis quota<br /> quota: /usr/bin/quota /usr/share/man/man1/quota.1.gz<br /><br /> Checking for the rpm package.<br /><br /> $ rpm -q quota<br /> quota-3.12-5<br /><br /> Step 2:<br /><br /> Edit /etc/fstab and add usrquota and grpquota options for "/dev/VolGroup00/LogVol00",<br /> which is shown on the first line below:<br /><br /> /dev/VolGroup00/LogVol00 / ext3 defaults,usrquota,grpquota 1 1<br /> LABEL=/boot /boot ext3 defaults 1 2<br /> none /dev/pts devpts gid=5,mode=620 0 0<br /> none /dev/shm tmpfs defaults 0 0<br /> none /proc proc defaults 0 0<br /> none /sys sysfs defaults 0 0<br /> /dev/VolGroup00/LogVol01 swap swap defaults 0 0<br /><br /> Step 3:<br /><br /> Remount the filesystem as follows:<br /><br /> $ mount -o remount /<br /><br /> Step 4:<br /><br /> Run quotacheck with the "-m" option. As with the steps above, this will have to be run with<br /> root privileges. This creates the quota database files, and it can take a long time if it is<br /> a large full filesystem.<br /><br /><br /> $ quotacheck -cugm /<br /><br /> Step 5:<br /><br /> This step is optional, but it's good to know if you need to recalculate quotas because of a<br /> system crash. It's demonstrated here, because at this point quotas have not been turned on.<br /> Again, note the "m" option below.<br /><br /> $ quotacheck -avumg<br /><br /> Step 6:<br /><br /> Set limits for specific users or groups using the "edquota" command. Shown below is the command<br /> to set up quotas for user "chirico". As shown, this user has used 161560 blocks, he has a soft<br /> limit of 1161560 and a hard limit of 900000. 
He has used 3085 inodes and has a soft limit of 10000<br /> and a hard limit of 12000.<br /><br /> $ edquota -f / chirico<br /><br /> Disk quotas for user chirico (uid 500):<br /> Filesystem blocks soft hard inodes soft hard<br /> /dev/mapper/VolGroup00-LogVol00 161560 1161560 900000 3085 10000 12000<br /><br /> You can put quotas on groups as well. The following is done as root. See (TIP 186 and TIP 6) for creating<br /> groups and adding users to groups.<br /><br /> $ edquota -g share<br /><br /> If you create a sharable directory for anyone in the group "share" (TIP 6), quota restrictions against<br /> group "share" will only apply to files added in the "/home/share" directory. When user "chirico" creates<br /> files in "/home/share" they also count against his user quota as well. However, when files are created in<br /> his home directory they do not go against the "share" group.<br /><br /> Note - if you get errors when trying to run "edquota -g share", turn quotas off with "quotaoff /" and<br /> run "quotacheck -avugm". Then, turn the quotas back on with "quotaon /".<br /><br /> You can see the status of the group quota with the following command:<br /><br /> $ quota -g share<br /><br /> Step 7:<br /><br /> Turn on quotas with the "quotaon" command. This command needs to be done with root privileges.<br /><br /> $ quotaon /<br /><br /> Step 8:<br /><br /> Check the "/etc/quotatab" file for the correct entries. Note that when you do the "mount" command<br /> the filesystem returned needs to match what is in the "quotatab" file. 
I have noticed that this<br /> is not the case by default.<br /><br /> $ mount<br /> /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw,usrquota,grpquota)<br /><br /> So the "/etc/quotatab" must contain the following line.<br /><br /> /dev/mapper/VolGroup00-LogVol00: This is the Volume group<br /><br /> Step 9:<br /><br /> Run "warnquota" as a check that the "/etc/quotatab" file is set up correctly.<br /><br /> $ warnquota<br /><br /> Step 10:<br /><br /> Set up a daily cron job for running "warnquota". The following should be placed<br /> in "/etc/cron.daily"<br /><br /> #!/bin/sh<br /> # Place this file in /etc/cron.daily<br /> # with rights 0755<br /> /usr/sbin/warnquota<br /> EXITVALUE=$?<br /> if [ $EXITVALUE != 0 ]; then<br /> /usr/bin/logger -t warnquota "ALERT exited abnormally with [$EXITVALUE]"<br /> fi<br /> exit 0<br /><br /> References:<br /> http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-disk-quotas.html<br /> (TIP 6, TIP 186, and TIP 205)<br /><br /><br /><br />TIP 168:<br /><br /> rdist - remote file distribution client program. You can use this program in combination with<br /> ssh. This program does more than just copy files. Once a file has been copied, you can dictate<br /> other actions to be performed. Or you can hold off copying altogether if the destination is<br /> running low on inodes or disk space.<br /><br /> For the purpose of this example, all commands will be run on "squeezel.squeezel.com", and the<br /> computers that will be updated are "tape.squeezel.com" and "closet.squeezel.com". Obviously, you<br /> would substitute your computer names.<br /><br /> It helps to set up ssh keys on each computer first. Reference [http://souptonuts.sourceforge.net/sshtips.htm]<br /> and (TIP 12).<br /><br /> Step 1: Create the Configuration file myDistfile<br /><br /> Below is my sample "myDistfile". 
This file will access hosts "tape.squeezel.com" using username chirico<br /> and "closet.squeezel.com" with the username running this command, and copy the<br /> files "/home/chirico/file1" and "/home/chirico/file2" to these two servers, creating the<br /> directory ~/tmpdir if it doesn't exist. Once these files are updated, a mail check ("sendmail -bv")<br /> will be performed, and mail will be sent to "chirico@squeezel". This happens twice, once for each file.<br /><br /> Note the line "/home/chirico/file2 -> tape.squeezel.com", which copies the file "file2" to<br /> tape.squeezel.com, renaming it to "tapedest" in the directory "/home/chirico". Once this file<br /> is copied, the rights are modified with "chmod +r". Likewise, "/home/chirico/file2 -> closet.squeezel.com"<br /> copies the file file2, which is renamed as closetdest.<br /><br /> # Contents of myDistfile<br /> HOSTS = ( chirico@tape.squeezel.com closet.squeezel.com )<br /><br /> FILES = ( /home/chirico/file1 /home/chirico/file2 )<br /><br /> ${FILES} -> ${HOSTS}<br /> # Directory tmpdir will be created if it doesn't exist<br /> install tmpdir ;<br /> special /home/chirico/file1 "/usr/sbin/sendmail -bv mchirico@gmail.com";<br /> notify chirico@squeezel;<br /><br /> /home/chirico/file2 -> tape.squeezel.com<br /> install /home/chirico/tapedest;<br /> special /home/chirico/tapedest "chmod +r /home/chirico/tapedest";<br /><br /> /home/chirico/file2 -> closet.squeezel.com<br /> install /home/chirico/closetdest;<br /><br /><br /> Step 2: Command from squeezel.squeezel.com to run myDistfile above<br /><br /> Below is the command that will execute the contents in "myDistfile". This command is run from the<br /> computer "squeezel.squeezel.com". 
All output will go in the file "cmd1rdist.log".<br /><br /> $ rdist -P /usr/local/bin/ssh -f ./myDistfile -l file=./cmd1rdist.log=all<br /><br /> Obviously you want secure transport, so the -P option tells rdist to use ssh as its<br /> transport program.<br /><br /><br /><br />TIP 169:<br /><br /> Restricting root logins (/etc/securetty). ctl-alt-F3 will give you a login prompt for tty3.<br /> Take a look at the contents of "/etc/securetty". To prevent<br /> root from logging in on this device, take out tty3 from this listing. Note, you can always<br /> log in as another user and then su to root. Below is an example of the default<br /> "/etc/securetty" that allows root to log in everywhere.<br /><br /> [root@squeezel ~]# cat /etc/securetty<br /> console<br /> vc/1<br /> vc/2<br /> vc/3<br /> vc/4<br /> vc/5<br /> vc/6<br /> vc/7<br /> vc/8<br /> vc/9<br /> vc/10<br /> vc/11<br /> tty1<br /> tty2<br /> tty3<br /> tty4<br /> tty5<br /> tty6<br /> tty7<br /> tty8<br /> tty9<br /> tty10<br /> tty11<br /><br /><br /><br />TIP 170:<br /><br /> Perl map function. 
Try the following to get a quick take on this function,<br /> which increments each value in the array @a:<br /><br /> #!/usr/bin/perl<br /> @a = (1,2,3);<br /> map {$_++} @a;<br /> map { print "$_\n" } @a;<br /><br /> or<br /><br /> #!/usr/bin/perl<br /> @a = (1,2,3);<br /> map { print "$_\n"} map {++$_} @a;<br /><br /> And you can easily make modifications, like reversing the order<br /><br /> #!/usr/bin/perl<br /> @a = (1,2,3);<br /> map { print "$_\n"} reverse map {++$_} @a;<br /><br /> Plus there is a grep() function that works on each element as well<br /><br /> #!/usr/bin/perl<br /> @a = (1,2,3);<br /> map { print "$_\n"} reverse grep{ $_ > 3} map {++$_} @a;<br /><br /> To get only the even numbers in reverse order:<br /><br /> #!/usr/bin/perl<br /> @a = (1,2,3);<br /> map { print "$_\n"} reverse grep{ !($_ % 2)} map {++$_} @a;<br /><br /><br /> Reference: http://www-128.ibm.com/developerworks/linux/library/l-road4.html<br /><br /><br /><br />TIP 171:<br /><br /> Perl - subroutine call and shifting through variables. A simple and useful<br /> technique.<br /><br /> #!/usr/bin/perl<br /> sub test {<br /> my $mval;<br /> while( $mval = shift ) {<br /> print " $mval\n";<br /> }<br /> }<br /><br /> test("one","two","three");<br /><br /><br /><br />TIP 172:<br /><br /> Tcp wrappers - First "/etc/hosts.allow" is checked, and if there is a matching entry in this file, no more<br /> checking is done. If there are no matches in "/etc/hosts.allow", the "/etc/hosts.deny" file is checked<br /> and if a match is found, that service is blocked for that host.<br /><br /> Example "/etc/hosts.deny" file:<br /><br /> sshd: 192.168.1.171<br /><br /> The above file blocks access to computer 192.168.1.171. It's also possible to run commands when<br /> someone from this computer tries to ssh in. 
This example sends mail.<br /><br /> sshd: 192.168.1.171: spawn (echo -e "%d %h %H %u"| /bin/mail -s 'hosts.deny entry' root)<br /><br /> Of course, you can also run commands in the "/etc/hosts.allow" if you wanted mail sent for a successful<br /> login.<br /><br /><br /><br />TIP 173:<br /><br /> pgrep, pkill - look up or signal processes based on name and other attributes.<br /><br /> To quickly find all instances of ssh running, for user root, execute the following<br /> command:<br /><br /> $ pgrep -u root -l ssh<br /><br /> To kill a process, or send a signal, use the "pkill" command. For example, to<br /> make syslog reread its configuration file:<br /><br /> $ pkill -HUP syslogd<br /><br /> Another useful command is "pidof", which can tell you how many instances of a process are running.<br /> This can be useful for detecting DOS attacks.<br /><br /> $ pidof sshd<br /> 4783 4781 30008 30006 29888 29886 2246<br /><br /> Above there are 7 sshd's running. Reference "Tcpdump, Raw Socket and Libpcap Tutorial"<br /> at [http://souptonuts.sourceforge.net/tcpdump_tutorial.html].<br /><br /><br /><br />TIP 174:<br /><br /> Password Cracking - tools to check your users' passwords:<br /><br /> John The Ripper<br /> http://www.openwall.com/john/<br /><br /> Crack<br /> http://www.crypticide.com/users/alecm/<br /><br /> Slurpie<br /> http://www.ussrback.com/distributed.htm<br /><br /><br /><br />TIP 175:<br /><br /> Password Aging - setting the number of days a password is valid.<br /><br /> $ chage -M 90 <username><br /><br /><br /><br />TIP 176:<br /><br /> Kernel Performance Tuning - Documentation/sysctl/vm.txt in the kernel source documents settings to<br /> improve performance. 
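All of these tunables are exposed as plain files under /proc/sys, so inspecting one is just a file read, and setting one (as root) is a file write or a sysctl call; a quick sketch:

```shell
#!/bin/sh
# Read the current overcommit policy; prints a single number (0, 1 or 2).
cat /proc/sys/vm/overcommit_memory

# Setting it requires root; the two commented forms below are equivalent.
# echo 1 > /proc/sys/vm/overcommit_memory
# sysctl -w vm.overcommit_memory=1
```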
Below are some examples.<br /><br /> overcommit_memory: 0 -- default: heuristically estimate whether enough memory is available<br /> 1 -- kernel pretends there is always enough memory until it runs out<br /> 2 -- never overcommit<br /><br /> $ cat /proc/sys/vm/overcommit_memory<br /> 0<br /><br /> page-cluster:<br /> The Linux VM subsystem avoids excessive disk seeks by reading<br /> multiple pages on a page fault. The number of pages it reads<br /> is dependent on the amount of memory in your machine.<br /> <br /> The number of pages the kernel reads in at once is equal to<br /> 2 ^ page-cluster. Values above 2 ^ 5 don't make much sense<br /> for swap because we only cluster swap data in 32-page groups.<br /><br /> $ cat /proc/sys/vm/page-cluster<br /> 3<br /><br /> min_free_kbytes:<br /> This is used to force the Linux VM to keep a minimum number<br /> of kilobytes free. The VM uses this number to compute a pages_min<br /> value for each lowmem zone in the system. Each lowmem zone gets<br /> a number of reserved free pages based proportionally on its size.<br /><br /> $ cat /proc/sys/vm/min_free_kbytes<br /> 3831<br /><br /> max_map_count:<br /> This file contains the maximum number of memory map areas a process<br /> may have. Memory map areas are used as a side-effect of calling<br /> malloc, directly by mmap and mprotect, and also when loading shared<br /> libraries.<br /><br /> While most applications need less than a thousand maps, certain<br /> programs, particularly malloc debuggers, may consume lots of them,<br /> e.g., up to one or two maps per allocation.<br /><br /> The default value is 65536.<br /><br /> $ cat /proc/sys/vm/max_map_count<br /> 65536<br /><br /> Also see http://people.redhat.com/alikins/system_tuning.html<br /><br /><br /><br />TIP 177:<br /><br /> IO Scheduler - Documentation/block/as-iosched.txt in the kernel source documents settings for disk<br /> performance.<br /><br /> If you're not sure what partitions you have, run "$ cat /proc/partitions". 
This example<br /> assumes hda, and you can see some of the kernel settings:<br /><br /> $ ls /sys/block/hda/queue/iosched<br /> back_seek_max back_seek_penalty clear_elapsed fifo_batch_expire fifo_expire_async<br /> fifo_expire_sync find_best_crq key_type quantum queued<br /><br /> References: http://lwn.net/Articles/102505/<br /> http://bhhdoa.org.au/pipermail/ck/2004-September/000961.html<br /><br /><br /><br />TIP 178:<br /><br /> iozone -- getting data on disk performance (http://www.iozone.org/). This is a very<br /> comprehensive package.<br /><br /> $ wget http://www.iozone.org/src/current/iozone3_242.tar<br /> $ tar -xf iozone3_242.tar<br /> $ cd iozone3_242/src/current<br /> $ make linux<br /><br /> At this point you should read the documentation. There is no "make install". You<br /> copy it to each filesystem you want to run this program on. Below are some quick<br /> start commands.<br /><br /> A good comprehensive test:<br /><br /> $ iozone -a<br /><br /> I prefer this for small filesystems. It limits the file size to 10000 KB and does<br /> the output in operations per second (higher numbers mean a faster drive).<br /><br /> $ ./iozone -a -s 10000 -O<br /><br /><br /><br />TIP 179:<br /><br /> history - bash command to get a history of all commands typed. But, here is a way<br /> that you can get date and time listed as well.<br /><br /> $ HISTTIMEFORMAT="%y/%m/%d %T "<br /><br /> Defining the environment variable above gives you the date/time info when you<br /> execute history:<br /><br /> $ history<br /> ...<br /> 175 05/06/30 12:51:46 grep '141.162.' 
mout > mout2<br /> 176 05/06/30 12:51:48 e mout2<br /> 177 05/06/30 12:56:59 ls<br /> 178 05/06/30 12:57:02 ls<br /> 179 05/06/30 12:57:39 ls<br /> 180 05/06/30 12:57:49 ls -l<br /> 181 05/06/30 13:01:10 history<br /> 182 05/06/30 13:01:20 HISTTIMEFORMAT="%y/%m/%d %T "<br /> 183 05/06/30 13:01:23 history<br /> ...<br /><br /><br /><br />TIP 180:<br /><br /> .config - Fedora Core: getting the .config to rebuild the kernel. You can find<br /> this ".config" file at the following location:<br /><br /> $ ls "/lib/modules/$(uname -r)/build/.config"<br /><br /> Or, to see the contents<br /><br /> $ cat "/lib/modules/$(uname -r)/build/.config"<br /><br /> This can be important if you're planning to build your own kernel.<br /><br /><br /><br />TIP 181:<br /><br /> Listing control key settings.<br /><br /> $ stty -a<br /> speed 38400 baud; rows 0; columns 0; line = 0;<br /> intr = ^C; quit = ^\; erase = <undef>; kill = <undef>; eof = ^D; eol = <undef>; eol2 = <undef>; start = ^Q;<br /> stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;<br /> -parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts<br /> -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc -ixany -imaxbel<br /> opost -olcuc -ocrnl -onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0<br /> isig icanon iexten -echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke<br /><br /><br /><br />TIP 182:<br /><br /> iptables DNAT and SNAT. You have a webserver on 192.168.1.71. When people query this webserver, you want them<br /> to go to 192.168.1.81, with no indication that they are going to another web server. 
In fact, they always make<br /> their web hits to 192.168.1.71.<br /><br /> The following are the iptables commands:<br /><br /> $ echo 1 > /proc/sys/net/ipv4/ip_forward<br /> $ iptables -t nat -A PREROUTING -d 192.168.1.71 -p tcp --dport 80 -j DNAT --to 192.168.1.81<br /> $ iptables -t nat -A POSTROUTING -d 192.168.1.81 -s 192.168.1.0/24 -p tcp --dport 80 -j SNAT --to 192.168.1.71<br /><br /> Change 192.168.1.0/24 to whatever source you expect the web browsers to come in from. Below is the tcpdump output showing that<br /> all traffic is relayed via 192.168.1.71<br /><br /> [root@closet iptables]# tcpdump -nN port 80<br /><br /> 17:34:58.790398 IP 192.168.1.102.1158 > 192.168.1.71.80: S 3620106373:3620106373(0) win 16384 <mss><br /> 17:34:58.790465 IP 192.168.1.71.1158 > 192.168.1.81.80: S 3620106373:3620106373(0) win 16384 <mss><br /> 17:34:58.790703 IP 192.168.1.81.80 > 192.168.1.71.1158: S 1973665156:1973665156(0) ack 3620106374 win 5840 <mss><br /> 17:34:58.790720 IP 192.168.1.71.80 > 192.168.1.102.1158: S 1973665156:1973665156(0) ack 3620106374 win 5840 <mss><br /> 17:34:58.790951 IP 192.168.1.102.1158 > 192.168.1.71.80: . ack 1 win 17520<br /> 17:34:58.790965 IP 192.168.1.71.1158 > 192.168.1.81.80: . ack 1 win 17520<br /> 17:34:58.791451 IP 192.168.1.102.1158 > 192.168.1.71.80: P 1:327(326) ack 1 win 17520<br /> 17:34:58.791472 IP 192.168.1.71.1158 > 192.168.1.81.80: P 1:327(326) ack 1 win 17520<br /> 17:34:58.791973 IP 192.168.1.81.80 > 192.168.1.71.1158: . ack 327 win 6432<br /><br /> Above the web client is on "192.168.1.102". You can see that the 1st server "192.168.1.71" then goes out to<br /> the 2nd server "192.168.1.81" on the second line. 
The third line shows the 2nd server "192.168.1.81" responding to<br /> the 1st server, and the fourth line passes this data back to the web client "192.168.1.102".<br /><br /> Note: You can save your current iptables settings with the following command:<br /><br /> $ iptables-save > iptables_store<br /><br /> The big advantage is that you can store the counters as well.<br /><br /> $ iptables-save -c > iptables_store_w_cnts<br /><br /> To restore the file, use the following:<br /><br /> $ iptables-restore -c < iptables_store_w_cnts<br /><br /><br /><br />TIP 183:<br /><br /> mailstats - display mail statistics. This command reads data from "/var/log/mail/statistics"<br /><br /> [root@closet ~]# mailstats<br /> Statistics from Sat Jun 25 15:59:52 2005<br /> M msgsfr bytes_from msgsto bytes_to msgsrej msgsdis msgsqur Mailer<br /> 4 1 2K 0 0K 0 0 0 esmtp<br /> 9 0 0K 1 2K 0 0 0 local<br /> =====================================================================<br /> T 1 2K 1 2K 0 0 0<br /> C 1 0 0<br /><br /><br /><br />TIP 184:<br /><br /> Profiling C Applications - Assume you have the following program p1.c:<br /><br /> /* Program p1.c */<br /> #include <stdio.h><br /> #include <stdlib.h><br /> <br /> void t1(int i)<br /> {<br /> printf("t1:%d\n", i);<br /> }<br /> <br /> void t2(int j)<br /> {<br /> printf("t2:%d\n", j);<br /> }<br /> <br /> int main(void)<br /> {<br /> int i, j;<br /> <br /> for (i = 0; i < 5; ++i) {<br /> t1(i);<br /> for (j = 0; j < 2; ++j) {<br /> t2(j);<br /> }<br /> }<br /> return 0;<br /> }<br /><br /> Compile the program as follows:<br /><br /> $ gcc -pg -g -o p1 p1.c<br /> $ ./p1<br /> t1:0<br /> t2:0<br /> t2:1<br /> t1:1<br /> t2:0<br /> t2:1<br /> t1:2<br /> t2:0<br /> t2:1<br /> t1:3<br /> t2:0<br /> t2:1<br /> t1:4<br /> t2:0<br /> t2:1<br /><br /> Next, to get the flat profile.<br /><br /> $ gprof -p -b p1<br /> Flat profile:<br /> <br /> Each sample counts as 0.01 seconds.<br /> no time accumulated<br /> <br /> % cumulative self self total<br /> time seconds 
seconds calls Ts/call Ts/call name<br /> 0.00 0.00 0.00 10 0.00 0.00 t2<br /> 0.00 0.00 0.00 5 0.00 0.00 t1<br /><br /><br /> Above note the 10 calls to t2 and 5 calls to t1.<br /><br /><br /><br />TIP 185:<br /><br /> CDPATH - this is a bash variable like PATH that defines a search path<br /> for the cd command.<br /><br /> Suppose you have the following directory structure:<br /><br /> /home/chirico/stuff<br /> |-- dirA<br /> `-- dirB<br /><br /> Assume you define CDPATH as follows:<br /><br /> CDPATH=/home/chirico/stuff<br /><br /> Now, no matter what directory you are in, if you use the cd command below<br /> you will automatically move to "/home/chirico/stuff/dirA".<br /><br /> $ cd dirA<br /><br /> Note you could be in "/etc" and will move directly to "/home/chirico/stuff/dirA".<br /> This variable has the same format as PATH - multiple entries are separated by a colon.<br /> If the current directory contains a sub-directory dirA, then it gets priority.<br /><br /> The following is part of my .bash_profile<br /><br /> CDPATH=/work/cpearls/src/posted_on_sf/:/work/souptonuts/documentation/:/home/chirico/deleteme/<br /> export PATH CVS_RSH EDITOR JAVA_HOME CDPATH<br /><br /><br /><br />TIP 186:<br /><br /> Groups - add groups and users to groups. The following shows how to create the group "share"<br /> and add the user "chirico" to this group. The following should be done as root, and<br /> assumes the account "chirico" already exists.<br /><br /> $ groupadd share<br /> $ usermod -G share chirico<br /><br /> Note the change made to "/etc/group" below:<br /><br /> $ cat /etc/group|grep 'share'<br /> share:x:616:chirico<br /><br /> If the user chirico is currently logged in, he should run the following<br /> command to immediately have group "share" rights. 
Or, the next time he logs<br /> in he will have access to this group.<br /><br /> $ newgrp share<br /><br /> Reference the following (TIP 6, TIP 167).<br /><br /><br /><br />TIP 187:<br /><br /> oprofile - steps for running oprofile on Fedora.<br /><br /> Step 1:<br /><br /> Find out what version of the kernel you are running.<br /><br /> $ uname -a<br /> Linux closet.squeezel.com 2.6.12-1.1398_FC4 #1 Fri Jul 15 00:52:32 EDT 2005 i686 i686 i386 GNU/Linux<br /><br /><br /> Step 2:<br /><br /> Download the source in a chosen directory. Above, I'm running 2.6.12-1, but I'm going to go for 2.6.12.3, since<br /> it's a little later. You want the signed file as well.<br /><br /> $ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.12.3.tar.gz<br /> $ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.12.3.tar.gz.sign<br /><br /> Now, check the signature.<br /><br /> $ gpg --verify linux-2.6.12.3.tar.gz.sign linux-2.6.12.3.tar.gz<br /><br /><br /> Step 3:<br /><br /> Unpack the file.<br /><br /> $ tar -xzf linux-2.6.12.3.tar.gz<br /> $ cd linux-2.6.12.3<br /><br /><br /> Step 4:<br /><br /> Copy the ".config" used to compile your previous kernel. You should find it<br /> at the following location "/lib/modules/$(uname -r)/build/.config".<br /><br /> Copy it to the linux-2.6.12.3 directory.<br /><br /> $ cp "/lib/modules/$(uname -r)/build/.config" .<br /><br /><br /> Step 5:<br /><br /> Run make as follows. It will ask a few questions during "make oldconfig". 
The<br /> make installs below will have to be done with root privileges.<br /><br /> $ make oldconfig<br /> $ make bzImage<br /> $ make modules<br /> $ make modules_install<br /> $ make install<br /><br /><br /> Step 6:<br /><br /> Edit the "/boot/grub/grub.conf" and set default = 0 as shown below in this<br /> file.<br /><br /> default=0<br /> timeout=5<br /> splashimage=(hd0,2)/grub/splash.xpm.gz<br /> hiddenmenu<br /> title Fedora Core (2.6.12.3)<br /> root (hd0,2)<br /> kernel /vmlinuz-2.6.12.3 ro root=/dev/VolGroup00/LogVol00 rhgb quiet<br /> initrd /initrd-2.6.12.3.img<br /> title Fedora Core (2.6.12-1.1398_FC4)<br /> root (hd0,2)<br /> kernel /vmlinuz-2.6.12-1.1398_FC4 ro root=/dev/VolGroup00/LogVol00 rhgb quiet<br /> initrd /initrd-2.6.12-1.1398_FC4.img<br /> title Fedora Core (2.6.11-1.1369_FC4)<br /> root (hd0,2)<br /> kernel /vmlinuz-2.6.11-1.1369_FC4 ro root=/dev/VolGroup00/LogVol00 rhgb quiet<br /> initrd /initrd-2.6.11-1.1369_FC4.img<br /> title Other<br /> rootnoverify (hd0,1)<br /> chainloader +1<br /><br /><br /> Step 7:<br /><br /> Shutdown with the restart option.<br /><br /> $ shutdown -r now<br /><br /><br /> Step 8:<br /><br /> Run opcontrol. The commands below are done as root. 
My kernel was compiled in the following<br /> directory "/home/kernel/linux-2.6.12.3/", so I'll run opcontrol as follows:<br /><br /> $ opcontrol --vmlinux=/home/kernel/linux-2.6.12.3/vmlinux<br /><br /> Now start.<br /><br /> $ opcontrol --start<br /> Using 2.6+ OProfile kernel interface.<br /> Reading module info.<br /> Using log file /var/lib/oprofile/oprofiled.log<br /> Daemon started.<br /> Profiler running.<br /><br /> Shutdown opcontrol.<br /><br /> $ opcontrol --shutdown<br /><br /> Run report.<br /><br /> $ opreport<br /><br /> CPU: CPU with timer interrupt, speed 0 MHz (estimated)<br /> Profiling through timer interrupt<br /> TIMER:0|<br /> samples| %|<br /> ------------------<br /> 156088 99.8746 vmlinux<br /> 60 0.0384 libc-2.3.5.so<br /> 30 0.0192 oprofiled<br /> 23 0.0147 libcrypto.so.0.9.7f<br /> 13 0.0083 bash<br /> 12 0.0077 screen<br /> 10 0.0064 sshd<br /> 9 0.0058 ssh<br /> 6 0.0038 ip_tables<br /> 6 0.0038 libncurses.so.5.4<br /> 5 0.0032 b44<br /> 5 0.0032 ext3<br /> 5 0.0032 ld-2.3.5.so<br /> 4 0.0026 ip_conntrack<br /> 4 0.0026 jbd<br /> 2 0.0013 grep<br /> 1 6.4e-04 libdns.so.20.0.2<br /> 1 6.4e-04 libisc.so.9.1.5<br /><br /><br /> Reference the following for more documentation:<br /> http://oprofile.sourceforge.net/doc/<br /><br /><br /><br /><br />TIP 188:<br /><br /> cyrus-imapd with Postfix using sasldb for authentication. 
For this example<br /> the server is tape.squeezel.com and the user is chirico.<br /><br /> Step 1:<br /> <br /> $ yum install cyrus-imapd<br /> $ yum install cyrus-imapd-utils<br /> <br /> You need "cyrus-imapd-utils" for cyradm.<br /> <br /> <br /> Step 2:<br /> <br /> Edit /etc/imapd.conf<br /> <br /> configdirectory: /var/lib/imap<br /> partition-default: /var/spool/imap<br /> admins: cyrus<br /> sievedir: /var/lib/imap/sieve<br /> sendmail: /usr/sbin/sendmail<br /> hashimapspool: true<br /> # Chirico Commented the below line<br /> # sasl_pwcheck_method: saslauthd<br /> # Because using sasldb<br /> sasl_pwcheck_method: auxprop<br /> sasl_auxprop_plugin: sasldb<br /> # Chirico end change<br /> sasl_mech_list: PLAIN<br /> tls_cert_file: /usr/share/ssl/certs/cyrus-imapd.pem<br /> tls_key_file: /usr/share/ssl/certs/cyrus-imapd.pem<br /> tls_ca_file: /usr/share/ssl/certs/ca-bundle.crt<br /> <br /> <br /> Step 3:<br /> <br /> Create users and passwords:<br /> <br /> $ saslpasswd2 -c -u `postconf -h myhostname` cyrus<br /> $ saslpasswd2 -c -u `postconf -h myhostname` chirico<br /> $ saslpasswd2 -c -u `postconf -h myhostname` allmail<br /> <br /> <br /> This will automatically create the file /etc/sasldb2. But look<br /> at the default rights, assuming you ran saslpasswd2 as root:<br /> <br /> $ ls -l /etc/sasldb2<br /> -rw-r----- 1 root root 12288 Jul 31 09:50 /etc/sasldb2<br /> <br /> We need to correct this in step 4.<br /> <br /> <br /> Step 4:<br /> <br /> $ chown root.mail /etc/sasldb2<br /> $ ls -l /etc/sasldb2<br /> -rw-r----- 1 root mail 12288 Jul 31 09:50 /etc/sasldb2<br /> <br /> <br /> Step 5:<br /> <br /> Update "/etc/postfix/main.cf". Note in /etc/imapd.conf the configdirectory<br /> points to /var/lib/imap, and if I look at this directory I see the<br /> socket directory. However, after starting /etc/init.d/cyrus-imapd there<br /> will be a socket file "/var/lib/imap/socket/lmtp".
(See step 6).<br /> <br /> mailbox_transport = lmtp:unix:/var/lib/imap/socket/lmtp<br /> <br /> or, alternatively, use the cyrus pipe transport defined in master.cf (see step 8):<br /> <br /> mailbox_transport = cyrus<br /> <br /> Restart postfix.<br /> <br /> /etc/init.d/postfix restart<br /> <br /> <br /> Step 6:<br /><br /> Start cyrus-imapd and look for the socket file.<br /> <br /> <br /> $ /etc/init.d/cyrus-imapd restart<br /> Shutting down cyrus-imapd: [ OK ]<br /> Starting cyrus-imapd: preparing databases... done. [ OK ]<br /> <br /> Now you should see the lmtp file:<br /> <br /> $ ls -l /var/lib/imap/socket/lmtp<br /> srwxrwxrwx 1 root root 0 Jul 31 10:04 /var/lib/imap/socket/lmtp<br /> <br /> <br /> Step 7:<br /> <br /> Add users. Note, you may have to go back to step 3 to add them to /etc/sasldb2<br /> as well.<br /> <br /> $ su - cyrus<br /> $ cyradm tape.squeezel.com<br /> tape.squeezel.com> cm user.chirico<br /> tape.squeezel.com> quit<br /> <br /> Now go back as root, and check that everything was created correctly.<br /> <br /> $ ls -l /var/spool/imap/c/user/<br /> total 8<br /> drwx------ 2 cyrus mail 4096 Jul 31 10:21 chirico<br /> <br /> <br /> Step 8:<br /> <br /> Run a mail test. We'll do this as root to the chirico account.<br /> <br /> $ mail -s 'First test' chirico<br /> first test<br /> .<br /> <br /> Now, still as root, check the maillog.
Normally everything should work.<br /> <br /> $ tail /var/log/maillog<br /> <br /> However, I got the following error below.<br /><br /> Jul 31 10:29:03 tape postfix/cleanup[30124]: AE7CB1B34A4: message-id=<20050731142903.ae7cb1b34a4@tape.squeezel.com><br /> Jul 31 10:29:03 tape postfix/qmgr[30120]: AE7CB1B34A4: from=<root@tape.squeezel.com>, size=315, nrcpt=1 (queue active)<br /> Jul 31 10:29:03 tape pipe[30128]: fatal: pipe_command: execvp /cyrus/bin/deliver: No such file or directory<br /><br /> If you get a similar error, you may need to adjust the setting in /etc/postfix/master.cf<br /> <br /> # This is the problem in /etc/postfix/master.cf<br /> cyrus unix - n n - - pipe<br /> user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}<br /> <br /> My deliver file is the following:<br /> <br /> $ ls -l /usr/lib/cyrus-imapd/deliver<br /> -rwxr-xr-x 1 root root 846228 Apr 4 18:59 /usr/lib/cyrus-imapd/deliver<br /> <br /> So I need to change my /etc/postfix/master.cf as follows:<br /> <br /> # Fix because my deliver file is under /usr/lib/cyrus-imapd/deliver<br /> cyrus unix - n n - - pipe<br /> user=cyrus argv=/usr/lib/cyrus-imapd/deliver -e -r ${sender} -m ${extension} ${user}<br /> <br /> <br /> If changes were needed, as in my case, restart postfix.<br /> <br /> $ /etc/init.d/postfix restart<br /> <br /> Now, if everything works, you should start to see numbers in the spool directory like "1."
and<br /> "2.".<br /> <br /> $ ls -l /var/spool/imap/c/user/chirico/<br /> total 40<br /> -rw------- 1 cyrus mail 545 Jul 31 10:44 1.<br /> -rw------- 1 cyrus mail 547 Jul 31 10:45 2.<br /> -rw------- 1 cyrus mail 1276 Jul 31 10:45 cyrus.cache<br /> -rw------- 1 cyrus mail 153 Jul 31 10:21 cyrus.header<br /> -rw------- 1 cyrus mail 196 Jul 31 10:45 cyrus.index<br /><br /> Step 9:<br /> <br /> Local firewall.<br /> <br /> # imap<br /> iptables -A INPUT -p udp -s 192.168.1.0/24 --dport 143 -j ACCEPT<br /> iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 143 -j ACCEPT<br /><br /> Step 10:<br /> <br /> Configure cyrus-imapd to start for run-levels 3 and 5.<br /><br /> # chkconfig --level 35 cyrus-imapd on<br /><br /><br /> HINTS -<br /><br /> Something to watch out for: If a user creates a .forward file in their shell account with the<br /> following entry, then mail will not get relayed to cyrus.<br /><br /> "|exec /usr/bin/procmail"<br /><br /><br /> The /var/log/maillog will show something like this:<br /><br /> to=<chirico@squeezel.squeezel.com>, orig_to=<chirico>, relay=local, delay=0,<br /> status=sent (delivered to command: exec /usr/bin/procmail)<br /><br /> Remove the ".forward" file from their home directory and you'll get the following:<br /><br /> to=<chirico@squeezel.squeezel.com>, relay=cyrus, delay=0,<br /> status=sent (squeezel.squeezel.com)<br /><br /><br /> mutt with IMAP? (See TIP 190)<br /><br /><br /><br />TIP 189:<br /><br /> expand - convert tabs to spaces in a file.<br /><br /> $ expand How_to_Linux_and_Open_Source.txt > notabs<br /><br /><br /><br />TIP 190:<br /><br /> mutt with imap - assume you have set up IMAP (see tip 188). Now how do you configure<br /> your ".muttrc" file to automatically connect, securely, to the IMAP server?<br /><br /><br /> <br /> Below is an example of my ".muttrc" file.
For this example, assume my password is "S0m3paSSw0r9".<br /><br /> $ cat .muttrc<br /> set spoolfile = "imaps://chirico:S0m3paSSw0r9@squeezel.squeezel.com/"<br /> set imap_force_ssl=yes<br /> set certificate_file=~/.mutt/certificates/72d31154.0<br /><br /> Now, you want to copy the certificate as a "file.pem" and run "c_rehash" to convert this<br /> file to a number. See the following article, under the fetchmail section, on how to do this.<br /><br /> http://souptonuts.sourceforge.net/postfix_tutorial.html<br /><br /> This is a quick summary of creating this key.<br /><br /> $ openssl s_client -connect squeezel.squeezel.com:995 -showcerts > file.pem<br /> $ c_rehash ~/.mutt/certificates<br /> <br /><br /><br />TIP 191:<br /><br /> Apache - CGI scripts. There are two ways to enable CGI scripts. The second method is the<br /> preferred method.<br /><br /><br /> First way, the easy way. Look for the "httpd.conf" file. On Fedora Core, this file can be<br /> found under "/etc/httpd/conf/httpd.conf". Edit this file as follows to make<br /> "http://squeezel.squeezel.com/chirico-cgi/" execute scripts.<br /><br /> ScriptAlias /chirico-cgi/ "/home/chirico/cgi-bin/"<br /><br /><br /> Second way, the better way. Instead of doing the above, make the following change in<br /> "/etc/httpd/conf/httpd.conf".<br /><br /> <directory><br /> Options +ExecCGI<br /> SetHandler cgi-script<br /> </directory><br /><br /><br /> Running a test script. Now copy the following test script into the directory "/home/chirico/cgi-bin"<br /> and change the rights to execute for the user running this.<br /><br /> #!/bin/sh<br /> # Save as test.cgi<br /> # chown apache.apache test.cgi<br /> # chmod 700 test.cgi<br /> echo "Content-Type: text/html"<br /> echo<br /> echo "Hello world from user <b>`whoami`</b>!
"<br /><br /><br /><br />TIP 192:<br /><br /> Bash - using getopts for your bash scripts.<br /><br /> #!/bin/bash<br /> while getopts "ab:cd:" Option<br /> # b and d take arguments<br /> #<br /> do<br /> case $Option in<br /> a) echo -e "a = $OPTIND";;<br /> b) echo -e "b = $OPTIND $OPTARG";;<br /> c) echo -e "c = $OPTIND";;<br /> d) echo -e "d = $OPTIND $OPTARG";;<br /> esac<br /> done<br /> shift $(($OPTIND - 1))<br /><br /><br /><br />TIP 193:<br /><br /> Sieve - creating sieve recipes with "sieveshell"<br /><br /> The following sieve script puts all jefferson.edu mail into the<br /> folder jefferson. This assumes that I have already created the IMAP<br /> directory, or mailbox (INBOX.jefferson), which can be done in mutt<br /> with the "C" command. Below is an example of finding "jefferson.edu"<br /> anywhere in the header.<br /><br /> # This is a file named jefferson.siv<br /> require ["fileinto"];<br /> if header :contains "Received" "from jefferson.edu" {<br /> fileinto "INBOX.jefferson";<br /> stop;<br /> }<br /><br /> Now, from the command prompt execute "sieveshell" with the hostname of the<br /> imap server. My server is squeezel.squeezel.com, so I would execute the<br /> following:<br /><br /> $ sieveshell squeezel.squeezel.com<br /> connecting to squeezel.squeezel.com<br /> Please enter your password:****<br /> > put jefferson.siv<br /> > activate jefferson.siv<br /> > list<br /> jefferson.siv <- active script<br /> > quit<br /><br /> Note the put brings in the script, and you need to activate it.<br /><br /> You can activate a sieve script for any user on your system if you are<br /> root. This is an example of activating a script for user chirico. Assume<br /> below the root prompt is "#".<br /><br /> # sieveshell -a chirico -u chirico squeezel.squeezel.com<br /><br /> You can also automate everything from a bash script. But note that after<br /> the -e the commands themselves, not a file containing the commands, follow within<br /> quotes.
This is the script I use for my home system.<br /><br /> #!/bin/bash<br /> sieveshell -a chirico -u chirico -e 'deactivate<br /> delete chirico.siv<br /> put chirico.siv<br /> activate chirico.siv<br /> list<br /> ' squeezel.squeezel.com<br /><br /><br /><br /> References:<br /> http://wiki.fastmail.fm/index.php/SieveRecipes<br /> http://www.cyrusoft.com/sieve/#documents<br /><br /><br /><br />TIP 194:<br /><br /> emacs - editing files remotely with tramp. Tramp comes with the latest version of emacs.<br /> That means if you're using Fedora Core 4, with emacs, you have tramp. This is<br /> ideal for editing files on remote computers that do not have emacs.<br /><br /> Edit the ".emacs" file and add the following lines:<br /><br /> (require 'tramp)<br /> (setq tramp-default-method "scp")<br /><br /> Now, to edit a file on the computer tape.squeezel.com, type C-x C-f and<br /> enter the following at the Find file prompt:<br /><br /> Find file:/chirico@tape.squeezel.com:test.txt<br /><br /><br /> References:<br /><br /> http://savannah.gnu.org/projects/tramp/<br /><br /> <br /><br />TIP 195:<br /><br /> trusted X11 forwarding - running gnome and KDE both on one screen, at the same<br /> time securely. The following assumes gnome is running on the current<br /> computer and "closet.squeezel.com" has KDE.<br /><br /> $ ssh -Y closet.squeezel.com<br /> $ startkde<br /><br /> Or assume you want to run gnome on "closet.squeezel.com".<br /><br /> $ ssh -Y closet.squeezel.com<br /> $ gnome-session<br /><br /> By default Fedora Core allows ForwardX11 over ssh. Note you want to use<br /> the -Y option above and NOT -X.<br /><br /> Suppose you want a remote "gnome-session" on ctl-alt-F12.
Below is an<br /> example of getting a session on the remote computer closet.squeezel.com, and you<br /> can still keep the above configuration.<br /><br /> First you must allow magic cookies for each server connection.<br /> <br /> $ MCOOKIE=$(mcookie)<br /> $ xauth add $(hostname)/unix:1 MIT-MAGIC-COOKIE-1 $MCOOKIE<br /> $ xauth add localhost/unix:1 MIT-MAGIC-COOKIE-1 $MCOOKIE<br /><br /> Again, note that you have to add this for EACH connection. So if you wanted :2 as well:<br /><br /> $ MCOOKIE=$(mcookie)<br /> $ xauth add $(hostname)/unix:2 MIT-MAGIC-COOKIE-1 $MCOOKIE<br /> $ xauth add localhost/unix:2 MIT-MAGIC-COOKIE-1 $MCOOKIE<br /><br /> On squeezel.squeezel.com create a new xterm. If :1 is taken below,<br /> try :2. The vt12 is for switching to ctl-alt-F12.<br /><br /> $ xinit -- :1 vt12<br /><br /> Note, if you do not add the above cookies, you will get the following error:<br /> <br /> Xlib: connection to ":1.0" refused by server<br /> Xlib: No protocol specified<br /><br /> The screen may be hard to read. At this point ssh -Y to the remote computer.<br /><br /> $ ssh -Y closet.squeezel.com<br /> $ gnome-session<br /><br /> Yes, you will get errors about sound and some custom drivers if the remote<br /> computer has different hardware. After it loads, you can switch back and<br /> forth between sessions with (ctl-alt-F12) and (ctl-alt-F7).
The escape character, by<br /> default with ssh, is "~", so enter "~" followed by "ctl-z" to suspend.<br /><br /><br /><br />TIP 197:<br /><br /> Quick way to send a text file:<br /><br /> $ sendmail -f mike.chirico@gmail.com mchirico@comcast.net < /etc/fstab<br /><br /> Or you can use mutt and send a binary file:<br /><br /> $ mutt -s "Pictures of the Kids" -a kids.jpg chirico@laptop.mchirico.org < text.txt<br /><br /><br /><br />TIP 198:<br /><br /> size - determining the size of the text segment, data segment, and "bss" or uninitialized data segment.<br /><br /> $ size /bin/sh /bin/bash<br /> text data bss dec hex filename<br /> 586946 22444 18784 628174 995ce /bin/sh<br /> 586946 22444 18784 628174 995ce /bin/bash<br /><br /> Note above that "/bin/sh" and "/bin/bash" have equal text, data and bss numbers. It's<br /> highly likely that these are the same programs.<br /><br /> $ ls -l /bin/sh<br /> lrwxrwxrwx 1 root root 4 Jan 14 2005 /bin/sh -> bash<br /><br /> Yep, it's the same program. Here's a further definition of each segment.<br /><br /> Text segment: The machine instructions that the CPU executes. This is usually<br /> read only and sharable.<br /><br /> Data segment: Contains initialized variables in a program. You also know these<br /> as declarations and definitions.<br /><br /> int max = 200;<br /><br /> Uninitialized data segment: Think of this as a declaration only, or data that<br /> is only initialized by the kernel to arithmetic 0 or null pointers<br /> before program execution.<br /><br /> char s[10];<br /><br /><br /><br />TIP 199:<br /><br /> Using the at command.<br /><br /> Below is a simple example of running a job at a set time that<br /> will send mail (-m) to the user that executed it.<br /><br /><br /> We'll execute job1 defined as follows and set to be executable.<br /><br /> $ cat ./job1<br /> #!/bin/bash<br /> date >> /tmp/job1<br /><br /> The at command is listed below. For queue "-q" names you can only<br /> specify one letter.
Here we're using x. The letter determines the<br /> priority with "a" the highest.<br /><br /> $ at -q x -f ./job1 -m 11:54am<br /> job 3 at 2005-10-04 11:54<br /><br /> Now, if you execute the atq command, you'll get the following.<br /><br /> $ atq<br /> 3 2005-10-04 11:54 x chirico<br /><br /> It's also possible to execute jobs at the command line by entering<br /> a ctl-d at the end of the input.<br /><br /> $ at -q x -m 12:08pm<br /> at> ls -l<br /> at> who<br /> at> date<br /> at> ^D<br /><br /><br /> Or for a job to execute 1 minute from now.<br /><br /> $ at -q x -m `date -d '1 minute' +"%H:%M"`<br /> at> ls -l<br /> at> date<br /> <br /><br /> Important points: The atd daemon must be running. To check if<br /> it's running, do the following:<br /><br /> $ /etc/init.d/atd status<br /><br /> Also, if there is an /etc/at.allow file, then only users in that<br /> file will be allowed to execute at.<br /><br /> If /etc/at.deny exists but is empty and there is no /etc/at.allow,<br /> then everyone can execute the at command.<br /><br /><br /><br />TIP 200:<br /><br /> lsusb - this command displays all USB buses and all connected devices.<br /><br /> $ lsusb<br /> Bus 005 Device 003: ID 413c:2010 Dell Computer Corp.<br /> Bus 005 Device 002: ID 413c:1003 Dell Computer Corp.<br /> Bus 005 Device 001: ID 0000:0000<br /> Bus 004 Device 001: ID 0000:0000<br /> Bus 003 Device 003: ID 0fc5:1227 Delcom Engineering<br /> Bus 003 Device 002: ID 046d:c016 Logitech, Inc.
Optical Mouse<br /> Bus 003 Device 001: ID 0000:0000<br /> Bus 002 Device 001: ID 0000:0000<br /> Bus 001 Device 001: ID 0000:0000<br /><br /><br /><br />TIP 201:<br /><br /> Memory fragmentation - if you suspect workload memory fragmentation issues<br /> and you want to monitor the current state of your system, then consider<br /> looking at the output from /proc/buddyinfo on recent kernels.<br /><br /> $ cat /proc/buddyinfo<br /> Node 0, zone DMA 541 218 42 2 0 0 0 1 1 1 0<br /> Node 0, zone Normal 2508 2614 52 1 5 5 0 1 1 1 0<br /> Node 0, zone HighMem 0 1 3 0 1 0 0 0 0 0 0<br /><br /> The following definition is taken from ./Documentation/filesystems/proc.txt in the<br /> Linux kernel source.<br /><br /> Each column represents the number of pages of a certain order which are<br /> available. In this case, there are 0 chunks of 2^0*PAGE_SIZE available in<br /> ZONE_DMA, 4 chunks of 2^1*PAGE_SIZE in ZONE_DMA, 101 chunks of 2^4*PAGE_SIZE<br /> available in ZONE_NORMAL, etc...<br /><br /><br /><br />TIP 202:<br /><br /> arp - Linux ARP kernel module. This command manipulates the kernel's ARP cache.<br /><br /> This is an example of the command.<br /><br /> $ arp<br /> Address HWtype HWaddress Flags Mask Iface<br /> tape.squeezel.com ether 00:50:DA:60:5B:AD C eth0<br /> squeezel.squeezel.com ether 00:11:11:8A:BE:3F C eth0<br /> gw.squeezel.com ether 00:0F:66:47:15:73 C eth0<br /><br /> <br />TIP 203:<br /><br /> dbench - performance monitoring.<br /><br /> So, how does your system react when the load average is above 600? Have you even seen a<br /> computer with a load average of 600?
Well, this could be your chance.<br /><br /> Reference: http://freshmeat.net/projects/dbench/<br /><br /> The following gives a load average of 10 on my system.<br /><br /> $ dbench 34<br /><br /> If you want a higher load, just increase the number.<br /><br /><br /><br />TIP 204:<br /><br /> /etc guide - a listing of common files in the /etc directory.<br /><br /> /etc/exports: this file is used to configure NFS.<br /><br /> /etc/ftpusers: the users on your system who are restricted from FTP login.<br /><br /> /etc/motd: message of the day, which users see after login.<br /><br /> /etc/named.conf: DNS config file.<br /><br /> /etc/profile: system-wide shell environment settings.<br /><br /> /etc/inittab: this file contains runlevel start information.<br /><br /> /etc/services: the services and their respective ports.<br /><br /> /etc/shells: this contains the names of all shells installed on the system.<br /><br /> /etc/passwd: this file contains user information.<br /><br /> /etc/group: security group rights.<br /><br /><br /> <br />TIP 205:<br /><br /> logger - a command-line utility for writing to /var/log/messages or the<br /> other files defined in /etc/syslog.conf.<br /><br /> $ logger -t TEST more of a test here<br /><br /> This is what shows up in /var/log/messages:<br /><br /> Oct 28 07:15:50 squeezel TEST: more of a test here<br /><br /><br /> <br />TIP 206:<br /><br /> accton, lastcomm - accounting on and last command. This is<br /> a way to monitor users on your system. As root, you<br /> would implement this as follows:<br /><br /> $ accton -h<br /> Usage: accton [-hV] [file]<br /> [--help] [--version]<br /><br /> The system's default process accounting file is /var/account/pacct.<br /><br /> Note the default file location is /var/account/pacct so we'll turn<br /> it on system-wide with the following command.<br /><br /> $ accton /var/account/pacct<br /><br /> Now take a look at this file. It will grow.
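A quick way to confirm the accounting file really is growing is to compare its size over an interval. Below is a minimal sketch; the growth helper and the throwaway demo file are my own illustration, not part of the accounting tools (the demo appends to a temp file itself, standing in for the kernel appending accounting records).

```shell
#!/bin/sh
# growth FILE SECONDS - print how many bytes FILE grows in SECONDS.
growth() {
    before=$(wc -c < "$1")
    sleep "$2"
    after=$(wc -c < "$1")
    echo $((after - before))
}

# Demo on a throwaway file: a background job appends a record while
# we are measuring, mimicking the kernel writing to pacct.
tmp=$(mktemp)
printf 'one record\n' > "$tmp"
( sleep 1; printf 'another record\n' >> "$tmp" ) &
delta=$(growth "$tmp" 2)
wait
rm -f "$tmp"
echo "grew $delta bytes"   # prints: grew 15 bytes
```

On a live system you would run, as root, something like "growth /var/account/pacct 10" after turning accounting on.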
To see commands that<br /> are executed, use the lastcomm command.<br /><br /> $ lastcomm<br /><br /> The above command gives output for all users. To get the data<br /> for user "chirico" execute the following command:<br /><br /> $ lastcomm --user chirico<br /><br /> You can also get a summary of commands with sa.<br /><br /> [chirico@big ~]$ sa<br /> 30 5.23re 0.00cp 10185k<br /> 11 4.83re 0.00cp 8961k ***other<br /> 8 0.13re 0.00cp 19744k nagios*<br /> 4 0.00re 0.00cp 2542k automount*<br /> 3 0.00re 0.00cp 680k sa<br /> 2 0.13re 0.00cp 17424k check_ping<br /> 2 0.13re 0.00cp 978k ping<br /><br /> To turn off accounting, execute accton without a filename.<br /><br /> $ accton<br /><br /><br /><br />TIP 207:<br /><br /> CPU Temperature on a laptop. The following is the temperature<br /> of my Dell laptop.<br /><br /> $ cat /proc/acpi/thermal_zone/THM/temperature<br /> temperature: 58 C<br /><br /> <br /><br /><br />TIP 208:<br /><br /> script -f with mkfifo to allow another user to view what you type<br /> in real-time.<br /><br /><br /> Step 1. Create a fifo (first in first out) file that the other<br /> user can view. For this example create the file /tmp/scriptout.<br /><br /> [chirico@laptop ~]$ mkfifo /tmp/scriptout<br /><br /> Step 2. Have the second user, the voyeur user, cat this file. Output will block<br /> for them until you complete step 3. The other user, voyeur,<br /> is executing the command below.<br /><br /> [voyeur@laptop ~]$ cat /tmp/scriptout<br /><br /> Step 3. The original user runs the following command.<br /><br /> [chirico@laptop ~]$ script -f /tmp/scriptout<br /> Script started, file is /tmp/scriptout<br /><br /> Now anything typed, including a vi session, will be displayed to the<br /> voyeur user in step 2.<br /><br /> See TIP 46.<br /><br /><br /><br />TIP 209:<br /><br /> fsck forced on next reboot.
To do this, as root issue the following commands.<br /><br /> $ cd /<br /> $ touch forcefsck<br /><br /> Now reboot the system, and when it comes up fsck will be forced on the system.<br /><br /> $ shutdown -r now<br /><br /><br /><br />TIP 210:<br /><br /> /dev/random and /dev/urandom differ in their random generating properties. /dev/random<br /> only returns bytes when enough noise has been generated from the entropy pool. In<br /> contrast, /dev/urandom will always return bytes.<br /><br /><br /> Reference: http://sourceforge.net/direct-dl/mchirico/cpearls/simple_but_common.tar.gz (rand.c)<br /><br /><br /><br />TIP 211:<br /><br /> Want to find out the speed of your NIC (Full or Half Duplex)? Then use ethtool.<br /><br /> [root@squeezel ~]# ethtool eth0<br /> Settings for eth0:<br /> Supported ports: [ MII ]<br /> Supported link modes: 10baseT/Half 10baseT/Full<br /> 100baseT/Half 100baseT/Full<br /> 1000baseT/Half 1000baseT/Full<br /> Supports auto-negotiation: Yes<br /> Advertised link modes: 10baseT/Half 10baseT/Full<br /> 100baseT/Half 100baseT/Full<br /> 1000baseT/Half 1000baseT/Full<br /> Advertised auto-negotiation: Yes<br /> Speed: 100Mb/s<br /> Duplex: Full<br /> Port: Twisted Pair<br /> PHYAD: 1<br /> Transceiver: internal<br /> Auto-negotiation: on<br /> Supports Wake-on: g<br /> Wake-on: d<br /> Current message level: 0x000000ff (255)<br /> Link detected: yes<br /><br /><br /><br /><br />TIP 212:<br /><br /> rpm install hang?
You might need to delete the lock state information.<br /><br /> $ nl /etc/rc.d/rc.sysinit | grep rpm<br /> 720 rm -f /var/lib/rpm/__db* &> /dev/null<br /><br /> Note the command:<br /><br /> $ rm -f /var/lib/rpm/__db*<br /><br /> Because sometimes you will run "rpm -ivh somerpm" and it will just sit<br /> there.<br /><br /><br /><br /><br />TIP 213:<br /><br /> Apache - limit access to certain directories based on IP address in the<br /> httpd.conf file.<br /><br /> You can do this completely from /etc/httpd/conf/httpd.conf, as<br /> shown below for multiple IP addresses. Note that all 3 settings<br /> are the same.<br /><br /> 10.0.0.0/255.0.0.0<br /> 10.0.0.0/8<br /> 10<br /><br /> However, the following is different:<br /><br /> 10.0.0.0/24 only allows 10.0.0.1 to 10.0.0.254<br /><br /><br /> Some complete settings in /etc/httpd/conf/httpd.conf<br /><br /> <directory><br /> Order allow,deny<br /> Allow from 10.0.0.0/8 # All 10.<br /> Allow from 192.168.0.0/16 # All 192.168<br /> Allow from 127 # All 127.<br /> </directory><br /><br /><br /> Here's an example that only allows access to .html files<br /> and nothing else for a particular directory.<br /><br /> <directory><br /> Satisfy All<br /> Order allow,deny<br /> Deny from all<br /> <files><br /> Order deny,allow<br /> Allow from all<br /> Satisfy Any<br /> </files><br /> </directory><br /><br /> Don't forget to reload httpd with the following command.<br /> <br /> $ /etc/init.d/httpd reload<br /><br /> <br /><br />TIP 214:<br /><br /> Open Files - determining how many files are currently open.<br /><br /> $ cat /proc/sys/fs/file-nr<br /> 2030 263 104851<br /> | | \- maximum open file descriptors<br /> | | <br /> | \- free allocated file descriptors<br /> |<br /> (Total allocated file descriptors since boot)<br /><br /> Note the maximum number can be set or changed.<br /><br /> $ cat /proc/sys/fs/file-max<br /> 104851<br /><br /> To change this:<br /><br /> $ echo "804854" > /proc/sys/fs/file-max<br
/><br /> Note lsof | wc -l will report higher numbers because this includes<br /> open files that are not using file descriptors such as directories,<br /> memory mapped files, and executable text files.<br /><br /> (Reference http://www.netadmintools.com/art295.html<br /> and also see the man page for this: man 5 proc )<br /><br /><br /><br />TIP 215:<br /><br /> Ctrl-Alt-Del will cause an immediate reboot, without syncing dirty buffers, if<br /> you set a value > 0 in /proc/sys/kernel/ctrl-alt-del.<br /><br /> $ echo 1 > /proc/sys/kernel/ctrl-alt-del<br /> <br /><br /> (Reference: man 5 proc)<br /><br /><br /><br /><br />TIP 216:<br /><br /> Redefining keys in X using xev and xmodmap. The program xev, run in an X window<br /> terminal screen, will display information on mouse movements, keys pressed and<br /> released.<br /><br /> $ xev<br /><br /> Now type shift-4 and you'll notice the event details below:<br /><br /> KeyPress event, serial 29, synthetic NO, window 0x3800001,<br /> root 0x60, subw 0x0, time 55307049, (418,242), root:(428,339),<br /> state 0x1, keycode 13 (keysym 0x24, dollar), same_screen YES,<br /> XLookupString gives 1 bytes: (24) "$"<br /> XmbLookupString gives 1 bytes: (24) "$"<br /> XFilterEvent returns: False<br /> <br /> KeyRelease event, serial 29, synthetic NO, window 0x3800001,<br /> root 0x60, subw 0x0, time 55307184, (418,242), root:(428,339),<br /> state 0x1, keycode 13 (keysym 0x24, dollar), same_screen YES,<br /> XLookupString gives 1 bytes: (24) "$"<br /> <br /> So, if you want to redefine this key to, say, copyright (see /usr/X11R6/include/X11/keysymdef.h),<br /> you would type the following.<br /><br /> $ xmodmap -e 'keycode 13 = 4 copyright'<br /><br /> To get the key back to the dollar, issue the following command.<br /><br /> $ xmodmap -e 'keycode 13 = 4 dollar'<br /><br /> By the way it's possible to define multiple key codes for a single key. You'll need<br /> to have a key defined as the Mode_switch.
Perhaps you'd like to use the Windows key,<br /> or the key with the Microsoft logo on it, since you're using Linux. This key is<br /> keycode 115.<br /><br /> $ xmodmap -e 'keycode 115 = Mode_switch'<br /><br /> Now you could define 3 values for the 4 key. For this example use pound, yen and dollar.<br /><br /> $ xmodmap -e 'keycode 13 = 4 dollar sterling yen'<br /><br /> So pressing the keys gives you the following:<br /><br /> shift-4 (dollar sign)<br /> Windows-4 (pound sign)<br /> Windows-shift-4 (yen sign)<br /><br /> You could go crazy and redefine all your keys.<br /><br /> (Thanks to hisham for this tip).<br /><br /><br /><br />TIP 217:<br /><br /> Threads - which version of threads are you using?<br /><br /> $ getconf GNU_LIBPTHREAD_VERSION<br /> NPTL 2.3.90<br /><br /> For a history on threads used with gcc, reference the following:<br /> <br /> http://en.wikipedia.org/wiki/NPTL<br /><br /><br /><br /><br />TIP 218:<br /><br /> Screenshots using ImageMagick.<br /><br /> If you want the entire screen, execute the following:<br /><br /> $ import -window root screen.png<br /><br /> Or, to crosshair-select the region with your mouse, execute<br /> the following instead.<br /><br /> $ import screen.png<br /><br /> KDE has the ability to take screenshots with the command below.<br /><br /> $ ksnapshot<br /><br /> GNOME likewise has a command too.<br /><br /> $ gnome-panel-screenshot --delay 6<br /><br /><br /><br /> Visiting ImageMagick again, the xwininfo command gives window information, and the id can be<br /> used to capture images with the import command.<br /><br /> $ xwininfo<br /><br /> xwininfo: Please select the window about which you<br /> would like information by clicking the<br /> mouse in that window.<br /><br /> xwininfo: Window id: 0x1e00007 "chirico@squeezel:/work/svn/souptonuts - Shell - Konsole"<br /><br /> Absolute upper-left X: 4<br /> Absolute upper-left Y: 21<br /> Relative upper-left X: 0<br /> Relative upper-left Y: 0<br /> Width: 880<br />
Height: 510<br /> Depth: 24<br /> Visual Class: TrueColor<br /> Border width: 0<br /> Class: InputOutput<br /> Colormap: 0x20 (installed)<br /> Bit Gravity State: NorthWestGravity<br /> Window Gravity State: NorthWestGravity<br /> Backing Store State: NotUseful<br /> Save Under State: no<br /> Map State: IsViewable<br /> Override Redirect State: no<br /> Corners: +4+21 -396+21 -396-493 +4-493<br /> -geometry 880x510+0+0<br /><br /> Now use the import command with the Window id. My example is shown below.<br /><br /> $ import -window 0x1e00007 id.miff<br /><br /> And to quickly display the image that you just saved, use the display command.<br /><br /> $ display id.miff<br /><br /><br /><br />TIP 219:<br /><br /> File Access over SSH using FUSE (Filesystem in USErspace). This is a very good way to<br /> mount a remote filesystem locally. It's like a secure NFS mount, but you don't need<br /> admin privileges on the remote computer. You do need to have fuse-sshfs installed on<br /> the local computer that will perform the filesystem mount.<br /><br /> The following works with Fedora Core 5. Only users added to the fuse group can mount<br /> external filesystems. Below the user chirico is being added to the group fuse.<br /><br /> $ yum install fuse-sshfs<br /> $ usermod -a -G fuse chirico<br /> <br /> You'll need to reboot.<br /><br /> $ shutdown -r now<br /><br /> <br /> Next I'm going to mount the remote filesystem v0.squeezel.com. This is done as user chirico<br /> on the local computer. I'm using root on the remote computer v0.squeezel.com because I<br /> want to mount the complete drive.<br /><br /> $ mkdir v0<br /> $ sshfs root@v0.squeezel.com:/ v0<br /> $ cd v0<br /> $ ls -l<br /> bin dev home lost+found media mnt opt q sbin srv tmp var<br /> boot etc lib master_backup misc net proc root selinux sys usr<br /> <br /><br /> Now to unmount the filesystem<br /><br /> $ fusermount -u /home/chirico/v0<br /><br /> Yes, you can mount the filesystem on boot. 
Below is an example entry for /etc/fstab, but<br /> this only allows users on the current system to view what is in /mnt/v0.<br /><br /> sshfs#root@v0.squeezel.com:/var/log /mnt/v0 fuse defaults 0 0<br /><br /> References:<br /> (http://fuse.sourceforge.net/sshfs.html)<br /><br /><br /><br /><br />TIP 220:<br /><br /> OpenVPN - A full-featured SSL VPN solution. The following demonstrates<br /> a very simple OpenVPN setup between two Fedora Core 5 computers,<br /> big.squeezel.com 192.168.1.12 and tape.squeezel.com 192.168.1.155.<br /><br /> As root install the package on both computers.<br /><br /> $ yum -y install openvpn<br /><br /><br /> Setup on big.squeezel.com 192.168.1.12<br /><br /> $ iptables -A INPUT -p udp -s 192.168.1.155 --dport 1194 -j ACCEPT<br /> $ iptables -A INPUT -i tun+ -j ACCEPT<br /> $ iptables -A INPUT -i tap+ -j ACCEPT<br /> $ iptables -A FORWARD -i tap+ -j ACCEPT<br /><br /> Note - make sure you have commented out the following line<br /> in /etc/sysconfig/iptables<br /><br /> # -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited<br /><br /> Now, continuing with the commands that need to be executed on<br /> big.squeezel.com 192.168.1.12, do one of the following.<br /> <br /> $ openvpn --remote tape.squeezel.com --dev tun1 --ifconfig 10.4.0.1 10.4.0.2 --verb 9<br /><br /> The above command produces a lot of debugging output. 
Once it's working you may want<br /> the following statement without the --verb 9 option.<br /><br /> $ openvpn --remote tape.squeezel.com --dev tun1 --ifconfig 10.4.0.1 10.4.0.2<br /><br /> After you finish the setup commands for tape.squeezel.com immediately below, you'll be<br /> able to access tape.squeezel.com as 10.4.0.2.<br /><br /><br /> Setup on tape.squeezel.com 192.168.1.155<br /><br /> $ iptables -A INPUT -p udp -s 192.168.1.12 --dport 1194 -j ACCEPT<br /> $ iptables -A INPUT -i tun+ -j ACCEPT<br /> $ iptables -A INPUT -i tap+ -j ACCEPT<br /> $ iptables -A FORWARD -i tap+ -j ACCEPT<br /><br /> Note - again, make sure you have commented out the following line<br /> in /etc/sysconfig/iptables<br /><br /> # -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited<br /><br /> The openvpn commands on tape.squeezel.com are reversed from what is shown<br /> above.<br /><br /> $ openvpn --remote big.squeezel.com --dev tun1 --ifconfig 10.4.0.2 10.4.0.1 --verb 9<br /><br /> Or<br /> $ openvpn --remote big.squeezel.com --dev tun1 --ifconfig 10.4.0.2 10.4.0.1<br /><br /><br /> Now you can access all services and ports from big.squeezel.com on 10.4.0.1 for<br /> such services as MySQL, secure Web, imap, etc. 
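 The iptables rules entered above do not survive a reboot. As a sketch, the equivalent permanent rules on big.squeezel.com could be added to /etc/sysconfig/iptables in iptables-save format (mirror them with 192.168.1.12 on tape.squeezel.com); running "service iptables save" after entering the ad-hoc rules achieves the same thing.

```
-A INPUT -p udp -s 192.168.1.155 --dport 1194 -j ACCEPT
-A INPUT -i tun+ -j ACCEPT
-A INPUT -i tap+ -j ACCEPT
-A FORWARD -i tap+ -j ACCEPT
```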
A quick test is nmap as follows:<br /><br /> $ nmap -A -T4 10.4.0.1<br /><br /> Starting Nmap 4.03 ( http://www.insecure.org/nmap/ ) at 2006-05-20 13:54 EDT<br /> Interesting ports on 10.4.0.1:<br /> (The 1671 ports scanned but not shown below are in state: closed)<br /> PORT STATE SERVICE VERSION<br /> 22/tcp open ssh OpenSSH 4.3 (protocol 2.0)<br /> 111/tcp open rpcbind 2 (rpc #100000)<br /> 3306/tcp open mysql MySQL (unauthorized)<br /><br /> Nmap finished: 1 IP address (1 host up) scanned in 7.116 seconds<br /><br /><br /><br />TIP 221:<br /><br /> openssl - Some common commands.<br /><br /> Finding the openssldir (Directory for OpenSSL files).<br /><br /> $ openssl version -a|grep OPENSSLDIR<br /> OPENSSLDIR: "/etc/pki/tls"<br /><br /> Connect to a secure SMTP server with STARTTLS, assuming the server name is<br /> squeezel.squeezel.com<br /><br /> $ openssl s_client -connect squeezel.squeezel.com:25 -starttls smtp<br /><br /> <br /><br /> Reference (http://www.madboa.com/geek/openssl/)<br /><br /><br /><br />TIP 222:<br /><br /> Bash functions. This is easy, and I find it very useful to create bash functions<br /> for repeated commands. For example, suppose you want to create a quick bash function<br /> to cd to /var/log, tail messages and tail secure. You can create this function as<br /> follows:<br /><br /> [root@v5 log]# m()<br /> m()<br /> > { cd /var/log<br /> { cd /var/log<br /> > tail messages<br /> tail messages<br /> > tail secure<br /> tail secure<br /> > }<br /> }<br /> <br /> Above I'm typing m() and hitting return. Note the echo on the next line, followed<br /> by the prompt >. I then enter {, and the rest of the body the same way.<br /><br /><br /><br />TIP 223:<br /><br /> Stats on a DNS server. You can get statistics from your DNS server.<br /><br /> The following works for BIND 9:<br /><br /> $ rndc stats<br /><br /> On my system I see the output in "/var/named/chroot/var/named/data/named_stats.txt", which<br /> is an FC4 system. 
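 rndc appends one plain-text block to the stats file per dump, so individual counters can be pulled out with standard tools. The following sketch runs against a made-up sample dump rather than a live named_stats.txt.

```shell
# Extract the most recent "success" counter from a stats dump.
# Sample data only; point awk at your real named_stats.txt instead.
cat > /tmp/named_stats.txt <<'EOF'
+++ Statistics Dump +++ (1153791199)
success 297621
nxdomain 33742
--- Statistics Dump --- (1153791199)
EOF
awk '/^success/ {n = $2} END {print n}' /tmp/named_stats.txt   # prints 297621
```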
By the way, if you're using BIND 8, the command is "ndc stats", but that<br /> has a completely different format.<br /><br /><br /> Format of the output<br /><br /> +++ Statistics Dump +++ (1153791199)<br /> success 297621<br /> referral 32<br /> nxrrset 21953<br /> nxdomain 33742<br /> recursion 28243<br /> failure 54<br /> --- Statistics Dump --- (1153791199)<br /><br /> The number (1153791199) can be converted with the date command.<br /><br /> $ date -d '1970-01-01 1153791199 sec'<br /> Tue Jul 25 02:33:19 EDT 2006<br /><br /> That's 1153791199 seconds since 1970-01-01 UTC. UTC is 4 hours ahead<br /> of EDT.<br /><br /><br /><br />TIP 224:<br /><br /> snmp - Simple Network Management Protocol. The following steps set up snmp on Fedora Core 5.<br /><br /> $ yum install net-snmp*<br /><br /> Next add the following line in "/etc/snmp/snmpd.conf" at the bottom.<br /><br /> rocommunity pA33worD<br /><br /> Start the snmp service.<br /><br /> $ /etc/init.d/snmpd restart<br /><br /> Once started, from the command prompt, it's possible to get stats on the computer.<br /><br /> $ snmpwalk -v 1 -c pA33worD localhost system<br /> Or<br /> $ snmpwalk -v 1 -c pA33worD localhost interface<br /><br /> Or<br /> $ snmpgetnext -v 1 -c pA33worD localhost sysUpTime<br /> DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (26452) 0:04:24.52<br /><br /> Note that Timeticks are in 100ths of a second, so the computer above has been running<br /> for 264.52 seconds.<br /><br /> Reference (TIP 225 shows how to use MRTG for gathering snmp stats).<br /> http://www.net-snmp.org/tutorial/tutorial-5/commands/snmpv3.html<br /><br /><br />TIP 225:<br /><br /> MRTG - Multi Router Traffic Grapher. 
<br /><br /> $ cfgmaker --output=/etc/mrtg/v5.squeezel.com \<br /> --ifref=ip --global "workdir:/var/www/html/mrtg/stats" \<br /> pA33worD@v5.squeezel.com<br /><br /> Reference: http://www.chinalinuxpub.com/doc/www.siliconvalleyccie.com/linux-hn/mrtg.htm<br /><br /><br /><br />TIP 226:<br /><br /> Back Trace - This is a method of getting a back trace for all processes on the system.<br /> It assumes the following: a. the kernel was built with CONFIG_MAGIC_SYSRQ<br /> enabled (which Fedora Core 5 kernels are); b. you have direct access to the<br /> monitor.<br /><br /> Step 1.<br /><br /> Ctrl-Alt-F1 (This brings you to the text console)<br /><br /> Step 2.<br /><br /> Alt-ScrollLock<br /> Ctrl-ScrollLock<br /> <br /> Note above that's Alt-ScrollLock followed by Ctrl-ScrollLock. You should see<br /> a lot of text on the screen. Too fast to read, but don't worry, the text will<br /> be in /var/log/messages at the end.<br /><br /> On my system the ScrollLock key is next to the NumLock key.<br /><br /><br /><br /><br />TIP 227:<br /><br /> Ext3 Tuning - One advantage of Ext3 over Ext2 is directory indexing, which improves file<br /> access in large directories or in directories containing<br /> many files. Directory indexing improves performance by using hashed binary<br /> trees.<br /><br /> There are two ways to enable dir_index. 
First, find the device using the mount<br /> command.<br /><br /> $ mount<br /><br /> /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)<br /> proc on /proc type proc (rw)<br /> sysfs on /sys type sysfs (rw)<br /> devpts on /dev/pts type devpts (rw,gid=5,mode=620)<br /> /dev/sda1 on /boot type ext3 (rw) <--- This is the one you want<br /> tmpfs on /dev/shm type tmpfs (rw)<br /> none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)<br /> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)<br /> automount(pid2001) on /net type autofs (rw,fd=4,pgrp=2001,minproto=2,maxproto=4) <br /><br /><br /> From the above output, the device used is /dev/sda1. With the tune2fs command,<br /> directory indexing will only apply to directories created after running the<br /> command below. <br /><br /> $ tune2fs -O dir_index /dev/sda1<br /><br /> However, if you want it to apply to all existing directories, use the e2fsck command as <br /> shown below, on an unmounted filesystem:<br /><br /> $ e2fsck -D -f /dev/sda1<br /><br /> You'll need to bypass the warning message.<br /><br /> <br /> Reference: "Tuning Journaling File Systems: A small amount of effort and time can yield big<br /> results", by Steve Best. Linux Magazine, September 10, 2006. This author also has<br /> a very good book titled "Linux Debugging and Performance Tuning."<br /><br /><br /><br />TIP 228:<br /><br /> NIC bonding - binding two or more NICs to one IP address to improve performance. The following<br /> instructions were done on Fedora Core 5.<br /><br /> Step 1.<br /> <br /> Create the file ifcfg-bond0 with the IP address, netmask and gateway. Shown<br /> below is my file.<br /><br /> $ cat /etc/sysconfig/network-scripts/ifcfg-bond0<br /><br /> DEVICE=bond0<br /> IPADDR=192.168.1.12<br /> NETMASK=255.255.255.0<br /> GATEWAY=192.168.1.1<br /> USERCTL=no<br /> BOOTPROTO=none<br /> ONBOOT=yes<br /><br /> Step 2.<br /><br /> Modify eth0, eth1 and eth2. Shown below are each one of my files. 
Note that<br /> you must comment out or remove the IP address, netmask, gateway and hardware<br /> address from each one of these files, since settings should only come from<br /> the ifcfg-bond0 file above. I've chosen to comment out the lines, instead of<br /> removing them, in case I decide to unbond my NICs sometime in the future.<br /><br /> $ cat /etc/sysconfig/network-scripts/ifcfg-eth0<br /><br /> # Linksys Gigabit Network Adapter<br /> DEVICE=eth0<br /> BOOTPROTO=none<br /> #HWADDR=00:12:17:5C:A7:9D<br /> #IPADDR=192.168.1.12<br /> #NETMASK=255.255.255.0<br /> #TYPE=Ethernet<br /> #GATEWAY=192.168.1.1<br /> #USERCTL=no<br /> #IPV6INIT=no<br /> #PEERDNS=yes<br /> ONBOOT=yes<br /> # Settings for Bond<br /> MASTER=bond0<br /> SLAVE=yes<br /><br /><br /> $ cat /etc/sysconfig/network-scripts/ifcfg-eth1<br /><br /> # Linksys Gigabit Network Adapter<br /> DEVICE=eth1<br /> BOOTPROTO=none<br /> #HWADDR=00:12:17:5C:A7:C9<br /> #IPADDR=192.168.1.13<br /> #NETMASK=255.255.255.0<br /> ONBOOT=yes<br /> #TYPE=Ethernet<br /> USERCTL=no<br /> #IPV6INIT=no<br /> #PEERDNS=yes<br /> #<br /> # Settings for bonding<br /> MASTER=bond0<br /> SLAVE=yes<br /><br /><br /> $ cat /etc/sysconfig/network-scripts/ifcfg-eth2<br /><br /> # Linksys Gigabit Network Adapter<br /> DEVICE=eth2<br /> BOOTPROTO=none<br /> #HWADDR=00:12:17:5C:A7:9D<br /> #IPADDR=192.168.1.12<br /> #NETMASK=255.255.255.0<br /> ONBOOT=yes<br /> #TYPE=Ethernet<br /> #GATEWAY=192.168.1.1<br /> #USERCTL=no<br /> #IPV6INIT=no<br /> #PEERDNS=yes<br /> MASTER=bond0<br /> SLAVE=yes<br /><br /> Step 3.<br /><br /> Set the load parameters for the bond0 bonding kernel module. 
Append the<br /> following lines to /etc/modprobe.conf<br /><br /> # bonding commands<br /> alias bond0 bonding<br /> options bond0 mode=balance-alb miimon=100<br /><br /><br /> Step 4.<br /><br /> Load the bonding driver module from the command prompt.<br /><br /> $ modprobe bonding<br /><br /><br /> Step 5.<br /><br /> Restart the network, or restart the computer. Note I restarted the computer,<br /> since my NICs above had MAC assignments.<br /><br /> $ service network restart # Or restart computer<br /><br /> Take a look at the proc settings.<br /><br /> $ cat /proc/net/bonding/bond0<br /> Ethernet Channel Bonding Driver: v3.0.3 (March 23, 2006)<br /><br /> Bonding Mode: adaptive load balancing<br /> Primary Slave: None<br /> Currently Active Slave: eth2<br /> MII Status: up<br /> MII Polling Interval (ms): 100<br /> Up Delay (ms): 0<br /> Down Delay (ms): 0<br /><br /> Slave Interface: eth2<br /> MII Status: up<br /> Link Failure Count: 0<br /> Permanent HW addr: 00:13:72:80:62:f0<br /> <br /> References:<br /><br /> http://www.cyberciti.biz/nixcraft/vivek/blogger/2006/04/linux-bond-or-team-multiple-network.php<br /> A good, well-written article describing the steps above.<br /><br /> http://sourceforge.net/project/showfiles.php?group_id=24692&package_id=146474<br /> Documentation for bonding that can also be found in the kernel source at<br /> ./Documentation/networking/bonding.txt<br /> <br /><br /><br />TIP 229:<br /><br /> /etc/nsswitch.conf - System Databases and Name Service Switch configuration file.<br /><br /> This file determines the lookup order of services. For example, to match a name<br /> to an IP address, an entry can be put into the /etc/hosts file, or a DNS query<br /> can be made. What's the order? 
Normally, it's the entry in the /etc/hosts file,<br /> because /etc/nsswitch.conf contains the following setting<br /> <br /> hosts: files dns<br /><br /><br /> See man nsswitch.conf for more settings.<br /><br /><br /><br />TIP 230:<br /><br /> Finding DST settings on the live system. In 2007 Daylight Saving Time was extended in the United<br /> States, Canada, and Bermuda. Before this change we adjusted the clocks on the last Sunday in<br /> October. Not anymore. We now change them on the first Sunday in November.<br /><br /> $ zdump -v EST5EDT |grep '2007'<br /><br /> EST5EDT Sun Mar 11 06:59:59 2007 UTC = Sun Mar 11 01:59:59 2007 EST isdst=0 gmtoff=-18000<br /> EST5EDT Sun Mar 11 07:00:00 2007 UTC = Sun Mar 11 03:00:00 2007 EDT isdst=1 gmtoff=-14400<br /> EST5EDT Sun Nov 4 05:59:59 2007 UTC = Sun Nov 4 01:59:59 2007 EDT isdst=1 gmtoff=-14400<br /> EST5EDT Sun Nov 4 06:00:00 2007 UTC = Sun Nov 4 01:00:00 2007 EST isdst=0 gmtoff=-18000<br /><br /> Correct settings for EDT are shown above. Note the months Mar and Nov.<br /><br /> You can also run the same command by location.<br /><br /> $ zdump -v /usr/share/zoneinfo/America/New_York|grep '2007'<br /><br /> Note: This time conversion file can be created manually. For instructions on how to perform<br /> this task, execute the following command.<br /><br /> $ man zic<br /><br /> zic is the time zone compiler.<br /><br /> Reference:<br /> http://www-1.ibm.com/support/docview.wss?rs=0&q1=T1010301&uid=isg3T1010301&loc=en_US&cs=utf-8&cc=us&lang=en<br /><br /><br /><br />TIP 231:<br /><br /> Qt - Compiling Qt 4 programs statically to run on remote systems that do not<br /> have Qt 4 libraries installed. You actually download the Qt 4 source<br /> to do this.<br /><br /><br /> Step 1 - Download Qt 4.<br /><br /> You will download a separate version of Qt 4. Yes, even if you have<br /> Qt 4 installed on your system, you'll want to download another<br /> version to statically compile your programs. 
I performed the<br /> following steps on my computer:<br /><br /> $ mkdir -p /home/src/qt<br /> $ cd /home/src/qt<br /> $ wget ftp://ftp.trolltech.com/qt/source/qt-x11-opensource-src-4.2.2.tar.gz<br /> $ tar -xzf qt-x11-opensource-src-4.2.2.tar.gz<br /><br /> Note, make sure you get the latest version of Qt. When I wrote this it<br /> was 4.2.2. Check for updates.<br /><br /><br /> Step 2 - Compile Qt for static mode<br /><br /> The next step is to compile Qt in static mode.<br /><br /> $ cd /home/src/qt/qt-x11-opensource-src-4.2.2<br /> $ ./configure -static -prefix /home/src/qt/qt-x11-opensource-src-4.2.2<br /> $ make sub-src<br /><br /> At this point Qt 4 is installed in static mode.<br /><br /><br /> Step 3 - Set PATH<br /><br /> Now set the PATH to reference this version.<br /><br /> $ PATH=/home/src/qt/qt-x11-opensource-src-4.2.2/bin:$PATH<br /> $ export PATH<br /><br /><br /> Step 4 - Compile Your Source<br /><br /> My program source is located in /home/chirico/widgetpaint<br /><br /> $ cd /home/chirico/widgetpaint<br /> $ qmake -project<br /> $ qmake -config release<br /> $ make<br /><br /> <br /> <br />TIP 232:<br /><br /> SELinux - FC6 quick fix for problems. Using system-config-securitylevel to<br /> fix simple problems.<br /><br /> $ ssh -Y user@servertofix<br /> $ system-config-securitylevel<br /><br /> You do not have to ssh into the computer as root. As long as X is running <br /> "init 5", then you can run the system-config command above and it will<br /> ask you for the root password.<br /><br /><br /><br /><br />TIP 233:<br /><br /> Mutt - tagging multiple messages and moving them to a different folder.<br /><br /> If you want to tag multiple messages with mutt, use the capital T when<br /> in mutt.<br /><br /> T<br /> ~A (To tag all messages. Note, enter the tilde "~" without quotes)<br /> ;s (After entering ;s, you'll be asked where to save the messages)<br /><br /> From here you can create a new folder. 
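 If you use that tag-and-save sequence often, it can be bound to one keystroke with a mutt macro in ~/.muttrc. This is only a sketch: the Esc-a binding and the =archive folder are placeholders I chose, not part of the original tip.

```
# ~/.muttrc - press Esc then a to tag all messages and save them
# to the =archive folder ("\n" acts as the enter key).
macro index \ea "T~A\n;s=archive\n"
```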
If you're using IMAP mail boxes, then<br /> use C to create a mailbox.<br /><br /> To purge deleted messages without exiting mutt, enter "$", without the quotes.<br /><br /> (Reference: http://www.mutt.org/doc/manual/manual-4.html )<br /><br /><br /><br />TIP 234:<br /><br /> Mutt - color coding messages in mutt.<br /><br /> The following is written in the .muttrc file.<br /><br /> color index brightblue default Poker<br /> color body brightyellow default Error<br /><br /> Note, the first line will color blue all index lines containing<br /> the word Poker. The second operates on the body of the<br /> message.<br /><br /><br />TIP 235:<br /><br /> cat - header, stdin, and footer. (Working with /dev/fd/0 or -)<br /><br /> If you have data from a command that you want preceded by<br /> the contents of a header file and followed by data in<br /> a footer file, then the following command may help.<br /><br /> $ w|cat header /dev/fd/0 footer<br /><br /> Above, the output of the "w" command follows the contents of<br /> the header file. Note "/dev/fd/0" refers to stdin. Yes, you<br /> could use "-" in its place in this situation. However, with some<br /> commands a leading "-" argument will be interpreted<br /> as a command line option, whereas "/dev/fd/0" will not.<br /><br /><br /><br /></pre>Praveenhttp://www.blogger.com/profile/03454611536899060975noreply@blogger.com0tag:blogger.com,1999:blog-4432468942521109730.post-49896873283865769612007-05-03T23:01:00.000-07:002007-05-03T23:04:00.398-07:00X11 Forwarding using SSH<p> To start this setup, you need a few things in place. First, you must have an SSH package installed. On Linux, these are the OpenSSH packages. Check your distribution to see what package you need to install (some install it as a standard package). Secondly, you need a Windows SSH client (versions for other OSes, like the Mac, are also available). I recommend PuTTY. 
It is a wonderful free SSH client and you can download it from <a href="http://www.chiark.greenend.org.uk/%7Esgtatham/putty/" target="_top">this link</a>. Remember to download the documentation and read it carefully. Other good free SSH clients are Tera Term Pro + TTSSH (an SSH extension to Tera Term) and SSH Secure Shell Client by SSH.com (free only for non-commercial use). I will break this down into steps again, so it is easy for you to follow. </p><ol type="1"><li><p> Open putty.exe by double-clicking it. It will bring up the interface. First, set up the connection info in the Host Name (or IP address) field and select SSH (SSH uses port 22). In the Category pane, find the Connection tree, expand SSH, and you will see the Tunnels window. Click "Enable X11 forwarding". The X display location defaults to "localhost:0". Now, go back to Session and save this session with a name you like. I normally use the Host Name so I can easily remember where I am connecting to. </p></li><li><p> In the example of Hummingbird Exceed, this is what you need to do. (For other X servers, the setup is similar.) Open Xconfig from your Exceed folder. In "Screen Definition", change to "Multiple" window mode and save it. Next, open the "Communication" icon and set the Startup mode to "Passive". </p></li><li><p> Now you are done. To test it, first use PuTTY (or another SSH client) to connect to your server. On the first connection, it will ask you whether you want to cache the security key or not. (Yes is the normal choice.) Once logged in, fire up Exceed. It will stay in the background. Now you can execute any of your X applications and they should be forwarded via SSH to your local screen. 
For example: <table bg="" style="color: rgb(224, 224, 224);" border="1" width="90%"><tbody><tr><td><span style="color: rgb(0, 0, 0);"><pre class="SCREEN">$ xclock &</pre></span></td></tr></tbody></table> </p><p> You should now see xclock running on your local screen. </p></li></ol><p> Notice the difference: you do not see a full X desktop. You are simply running X applications one by one, forwarded via SSH to your local screen. Therefore, you need to know the command for running each X application. All control is done via the SSH client window. To me, the security is worth the slight inconvenience! </p>Praveenhttp://www.blogger.com/profile/03454611536899060975noreply@blogger.com0