Kibler Electronics
Industrial Support Since 1975

LAMP - Linux, Apache, MySQL, Perl

    This year's buzzwords are "virtual" and "LAMP." Virtual refers to things like Xen and VMware, which I can say work wonderfully. If you haven't started using one of the many virtual systems to test product development, or just to play with new software, you're really missing a great toolset. This page, however, is about my use of LAMP to do embedded data collection. LAMP stands for a Linux host running the Apache web server, with a MySQL backend data system controlled by Perl, PHP, or in some cases even Python. Mine will be in Perl for now, since I have plenty of little tools, developed over the years, that show just how little coding it takes to create, edit, and update both SQL databases and web pages.

    I have amassed a rather large number of data points from my temperature-collecting system. Eight data points are collected each hour, and I have been doing this for almost two years now. The data represents simulated solar collectors, along with inside and outside temperatures. The concept is to show how bad west-facing windows and walls are in Northern California. I also wanted to show the amount of hot air one could get from a south-facing hot-air collector. The eight sensors are arranged in such a way as to hopefully provide useful data.

    There are two steps I needed to take: one is to put the data into a database so it can be searched and sorted; the other is to provide a means of pulling out the desired data and presenting the results in both table and graph form on a web site. The selection process should be at the whim of the page viewer. This, of course, is where the Perl scripts and tools come into play.

    I will do a simple one-time data dump into the SQL database to pre-load the data. Next will be modifying the collection tools to add data to the SQL database instead of the flat file now being used. That will be followed by some CGI Perl scripts to generate a page and provide a method of interaction with page viewers. Since I have been doing this type of work for many years now, it is mostly a matter of going through my collection of tools and adjusting them for the current tasks.

    The overall system setup

    We live on ten acres a few miles outside of town, and AT&T finds running DSL service to those of us outside densely populated areas too much to bother with. So we use a WiFi service that provides us with a 1.5 Mbps connection using Motorola Canopy units. We have had a few minor issues with trees growing up and blocking the signal, but generally we are happy and it works reliably. The interface from that service goes through a firewall, which originally was IPCop; between the power consumption and the noise (not to mention the concern of keeping a ten-year-old 586 running), it has now been upgraded to a $70 firewall brick. These low-noise, low-power units generally come with both a DMZ and several Ethernet ports.

    I have several servers set up for my web service and user support throughout our house and office complex. A single server is connected to the DMZ and is the only system visible to the internet. A second server is set up to provide SAMBA service to my wife and son, and NFS service to my other Linux machines. This second Linux system is where I do data collection, run business bookkeeping, and host a pre-web server. The idea here is to give my wife and son a way to create web data on their SAMBA shares and preview the results before they reach the main server. The main server does an RSYNC update every four hours and thus holds only a copy of the intended web pages, not the full SAMBA shares. This gives me both a backup of the data (spread over two servers, not just one) and assurance that only public web data sits on a drive/system with access to the internet. The idea is to limit my exposure both to loss of data from system failure and to possible hacking from the internet. A third safety step is periodic backing up of data to a USB drive.

    To keep this level of safety going for my LAMP project, there will be two MySQL databases. The main database will be on the internal server, with replication of the data occurring onto the internet-facing server. The internal tools and scripts will be used to update data from the sensors. The web-based scripts will be used to query the mirrored database and provide web page generation through CGI scripting.
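    MySQL supports this kind of master-to-slave mirroring natively. As a sketch only (the server IDs and log names are assumptions, and the exact steps vary by MySQL version), the my.cnf fragments would look something like:

```ini
# Internal (master) server's my.cnf -- log changes to the "solar" database:
[mysqld]
server-id    = 1
log-bin      = mysql-bin
binlog-do-db = solar

# Internet-facing (slave) server's my.cnf:
[mysqld]
server-id    = 2
```

    The slave is then pointed at the master with a CHANGE MASTER TO statement (giving the master's host name and a replication account) followed by START SLAVE.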

    Setting Up MySQL

    Once the design has been figured out, the next step is assembling the parts into a completed system. The selected server is already running PostgreSQL for SQL-Ledger, and I could use it, but the "M" in LAMP is MySQL, so I went ahead and started another SQL server to show that you can have multiple SQL servers on the same system without problems. So step one here is getting MySQL up and running - you do that with these commands:

    
    1) Create a location on the backed up drive for storing MySQL data:
       #> mkdir /work/mysql
       #> chown mysql:mysql /work/mysql
    
    2) Remove the normal MySQL location and replace it with a link to our location:
       #> rm -rf /var/lib/mysql
       #> ln -s /work/mysql /var/lib/mysql
    
    3) Run MySQL's setup script to create the needed structure:
       #> /usr/bin/mysql_install_db --user=mysql
    
    4) Start MySQL daemon that actually provides all the services:
       #> /etc/init.d/mysql start
    
    5) Now test that the daemon is actually running:
       #> /usr/bin/mysqladmin version    (should show version number of mysql)
       #> /usr/bin/mysqladmin variables  (should show internal mysql settings)
       #> mysql mysql                    (start command line interface to mysql)
       mysql> SHOW DATABASES;            (show databases created by mysql_install_db)
       mysql> SHOW VARIABLES ;           (show mysql internal settings)
       mysql> help                       (show list of commands)
    
    6) Now make a new database for our collection of temperature readings:
       mysql> CREATE DATABASE solar ;    (create a new database called "solar")
       mysql> USE solar ;                (tell the tool mysql to use "solar")
       mysql> SHOW TABLES ;              (new database - should be no tables)
    
    7) Now create the new data table for holding our 8 data points and date and 
       time of readings - calling the table readings:
    mysql> CREATE TABLE readings (date date, time time, dp1 dec(4,2), dp2 dec(4,2),
    dp3 dec(4,2), dp4 dec(4,2), dp5 dec(4,2), dp6 dec(4,2), dp7 dec(4,2), dp8 dec(4,2)) ;
    
    8) The results should be a message that it created the table - let us check:
       mysql> SHOW TABLES ;              (should now see the new table "readings")
       mysql> SELECT * FROM readings ;   (should say nothing to show - empty table)
    
    9) Now load the data into the table:
       mysql> LOAD DATA INFILE '/work/webpage/temp.dat' INTO TABLE readings ;
       mysql> SELECT * FROM readings ;   (should show - full table)
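    If the load went in cleanly, a quick sanity check is to count the rows and peek at the newest entries (a sketch; the count will depend on your own data):

```sql
SELECT COUNT(*) FROM readings;
SELECT * FROM readings ORDER BY date DESC, time DESC LIMIT 5;
```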
    
    

    Well, we got a bit ahead of ourselves with that last command, because we haven't shown or talked about setting up the data so MySQL can read it with the "infile" command. If you dig into MySQL's docs, you can find out how to add clauses to the "infile" command that tell it how the data fields are separated. The command above, however, uses the default values: tabs between fields and a newline ("\n") as the line terminator. But what does our existing readings file look like?
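    For reference, the same load with those defaults spelled out explicitly would read as follows (a sketch showing the stock MySQL clauses, not extra options we need here):

```sql
LOAD DATA INFILE '/work/webpage/temp.dat'
    INTO TABLE readings
    FIELDS TERMINATED BY '\t'
    LINES TERMINATED BY '\n';
```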

    
    2006.09.20 | 13:30:06 |  35.62 |  25.68 |  24.75 |  26.93 |  43.50 |  45.18 |  43.00 |  35.06 |
    2006.09.20 | 14:30:06 |  37.62 |  27.43 |  25.93 |  28.31 |  44.56 |  45.81 |  44.18 |  39.81 |
    2006.09.20 | 15:30:07 |  39.75 |  27.81 |  27.12 |  28.81 |  42.81 |  46.12 |  43.12 |  43.62 |
    2006.09.20 | 16:30:06 |  36.81 |  28.75 |  28.00 |  29.12 |  34.00 |  37.06 |  39.00 |  38.12 |
    
    
    Clearly we have problems here: "|" instead of tabs, and look at the date, with "." between the values instead of the standard "2006-09-20" it should be. We could tell the "infile" command that fields are separated by "|", but it would fail on the date string for sure. Time to drag out some Perl scripts and "tweak" them to help us out. I already take the above table and run a Perl script against it to create an HTML page, so it should only take a few changes to redo this file.

    It took only a few minutes to adjust the old script into a "format changer" and run it against the data output. I noticed my blank entries were not coming out as 00.00 like I wanted, and thus my test for " " needed to be changed to "". My early readings used only one data collector, which has four temperature probes; I later went to two data collectors for a total of eight readings. I thought those early missing values were a single space, but in fact there were no spaces at all. A very minor change, but one I caught only during testing. Another problem I saw was that the first line, and a few others, were totally blank. I added a test for that, since the HTML conversion was done only on the last 24 lines and had never been bothered by a blank line before. The script now looks like this:

    #!/usr/bin/perl
    #
    # convert.pl 
    # make temp.tbl compatible for mysql infile.
    #
    $PUBPATH = "/work/webpage";
    $USESQL = "temp.dat";
    # read in the raw readings table
    @TMPLIST=(`cat $PUBPATH/temp.tbl`);
    # uncomment for debugging
    #print "@TMPLIST";
    
    open ( FINDEX, ">$PUBPATH/$USESQL");
    foreach (sort @TMPLIST)
    {
            chomp;
            ($tdate, $ttime, $tt1, $tt2, $tt3, $tt4, $Utt1, $Utt2, $Utt3, $Utt4, $junk) = split(/\|/, $_);
            if ( $tdate eq "" ) { next } ;
            if ( $tdate eq " " ) { next } ;
            #print "$tdate, $ttime, $tt1, $tt2, $tt3, $tt4, $Utt1, $Utt2, $Utt3, $Utt4, $junk\n";
            ($year, $mo, $day, $junk) = split(/\./, $tdate);
            $tdate ="$year-$mo-$day";
            if ( $tt1 eq "" ) { $tt1 = "00.00" ; }
            if ( $tt2 eq "" ) { $tt2 = "00.00" ; }
            if ( $tt3 eq "" ) { $tt3 = "00.00" ; }
            if ( $tt4 eq "" ) { $tt4 = "00.00" ; }
            if ( $Utt1 eq "" ) { $Utt1 = "00.00" ; }
            if ( $Utt2 eq "" ) { $Utt2 = "00.00" ; }
            if ( $Utt3 eq "" ) { $Utt3 = "00.00" ; }
            if ( $Utt4 eq "" ) { $Utt4 = "00.00" ; }
            print FINDEX "$tdate\t$ttime\t$tt1\t$tt2\t$tt3\t$tt4\t$Utt1\t$Utt2\t$Utt3\t$Utt4\n";
    }
    close FINDEX;
    exit(0);
    
    
    Run that on the old table, and out pops a file MySQL will import without complaint. After a quick look over the 15,000 lines of converted data, I restarted "mysql", did "USE solar", and ran the "infile" command, followed by a "SELECT *" - and now get 15,000 lines of data back, about two years' worth.
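    For readers who don't speak Perl, the same per-line clean-up can be sketched in Python (a minimal re-implementation for illustration only - the site actually runs the Perl script above):

```python
def convert_line(line):
    """Turn one pipe-separated temp.tbl line into the tab-separated,
    dash-dated form that MySQL's LOAD DATA INFILE defaults expect."""
    fields = [f.strip() for f in line.split("|")]
    if not fields or fields[0] == "":
        return None                        # skip blank lines
    date = fields[0].replace(".", "-")     # 2006.09.20 -> 2006-09-20
    time = fields[1]
    # Missing probe readings become 00.00, matching the Perl version.
    temps = [t if t else "00.00" for t in fields[2:10]]
    return "\t".join([date, time] + temps)

line = "2006.09.20 | 13:30:06 |  35.62 |  25.68 |  24.75 |  26.93 |  43.50 |  45.18 |  43.00 |  35.06 |"
print(convert_line(line))
```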

    Next is modifying the script again to add data-input functions for the solar database and entries for the table "readings". The old way the sampling worked goes like this: cron runs one script that gets data from collector one, followed two minutes later by a second script that gets data from collector two. The first script writes the date, the time, and then four data points. The second script adds four more data points, with no line ender - that was left to script one, before it adds in the date. That way I could have a third or fourth collector using the same second-collector script.
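    The schedule itself lives in crontab. As a sketch (the script names and paths are assumptions; the collector times match the readings shown earlier, and the nightly load time is illustrative):

```crontab
# m  h   dom mon dow  command
30   *   *   *   *    /usr/local/bin/collector1.pl   # date, time, four points
32   *   *   *   *    /usr/local/bin/collector2.pl   # four more points, no line ender
40   23  *   *   *    /usr/local/bin/temp_sql.pl     # nightly load into MySQL
```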

    Now I need to make a comment here about my use of files to hold data that could go directly into the database. I have found over the years that using these files allows for more flexible processing and easier debugging. Recently I was debugging a process only to find the problem was caused by a hung process, and thus every process after it was also hung. Had the data been written to a file, instead of depending on the SQL server to free the process after taking its data, only the SQL process would have hung and all the others would have continued on - putting out no data, but not hanging either. So I like to isolate one process from another's actions.

    In this case, I can create a script that simply reads the data written by the collector programs already in place and enters it into the SQL database. Since my convert.pl made an "infile"-compatible structure from the current data, all I really need to do is add some SQL command lines to my convert.pl script. I learned years ago that the command-line tools for SQL - in this case, "mysql" - can take commands from a simple file and run them as if you were typing them inside the program. So the simple explanation is: take convert.pl and the same commands I used to add the converted data, and make them one program. It's below.

    
    #!/usr/bin/perl
    #
    # temp_sql.pl
    # convert data to infile format and call mysql to read it.
    #
    ###
    #  variables
    #
    $DEBUG = 1;
    $TBLPATH = "/work/bill/webpage";
    $PUBPATH = "/work/bill/webpage";
    $USESQL  = "temp_sql.dat";
    
    $sql_input  = "$TBLPATH/sql_input";
    $sql_output = "$TBLPATH/sql_output";
    $sql_dump   = "$TBLPATH/sql_dump";
    $DATABASE   = "solar";
    $DBTABLE    = "readings";
    @TMPLIST=(`cat $PUBPATH/temp.tbl`);
    #print "@TMPLIST";
    
    open ( FINDEX, ">$PUBPATH/$USESQL");
    foreach (sort @TMPLIST)
    {
            chomp;
            ($tdate, $ttime, $tt1, $tt2, $tt3, $tt4, $Utt1, $Utt2, $Utt3, $Utt4, $junk) = split(/\|/, $_);
            if ( $tdate eq "" ) { next } ;
            if ( $tdate eq " " ) { next } ;
            #print "$tdate, $ttime, $tt1, $tt2, $tt3, $tt4, $Utt1, $Utt2, $Utt3, $Utt4, $junk\n";
            ($year, $mo, $day, $junk) = split(/\./, $tdate);
            $tdate ="$year-$mo-$day";
            if ( $tt1 eq "" ) { $tt1 = "00.00" ; }
            if ( $tt2 eq "" ) { $tt2 = "00.00" ; }
            if ( $tt3 eq "" ) { $tt3 = "00.00" ; }
            if ( $tt4 eq "" ) { $tt4 = "00.00" ; }
            if ( $Utt1 eq "" ) { $Utt1 = "00.00" ; }
            if ( $Utt2 eq "" ) { $Utt2 = "00.00" ; }
            if ( $Utt3 eq "" ) { $Utt3 = "00.00" ; }
            if ( $Utt4 eq "" ) { $Utt4 = "00.00" ; }
            print FINDEX "$tdate\t$ttime\t$tt1\t$tt2\t$tt3\t$tt4\t$Utt1\t$Utt2\t$Utt3\t$Utt4\n";
    }
    close FINDEX;
    
    # now we do sql query
    #############################
    
       open(OUTPUT, ">$sql_input") || die "Could not open $sql_input";
       print OUTPUT "USE $DATABASE ;\n";
       print OUTPUT "LOAD DATA INFILE '$PUBPATH/$USESQL' INTO TABLE $DBTABLE ; \n";
       print OUTPUT "\n";
       close(OUTPUT);
       print "mysql < $sql_input > $sql_output \n";
       $retval =  (system("mysql < $sql_input > $sql_output "));
    
    #   if ($retval != 0) {
    #       print "debug: Access to mysql failed  - status: $retval \n";
    #       exit(0);
    #     }
    
    $retval =  (system("cat $PUBPATH/temp.tbl >> $PUBPATH/temp_tbl.2006 "));
    $retval =  (system("rm  $PUBPATH/temp.tbl "));
    
    exit(0);
    
    
    It doesn't get much simpler than that. We do several things here: grab the collected data and convert the format, create an SQL command file, feed that command file to the mysql program, append the just-stored data to an archive file (so the data can be recreated if needed), and lastly remove the data file so we don't read it again next time. The way it is written, I can take one reading or a whole day's worth - which is what I ended up doing, running the data input once a day just before midnight. I also took this same script and reversed it, so that at 23:45 it reads all the data from that day (last data collection at 23:30) and creates the HTML page you see on my intro page. One script, "tweaked" several ways, to get the needed output.

    At this point, I have data being collected as always. The data, however, now goes into a MySQL database. The HTML page is generated with the same data as before, but now it comes out of the MySQL database. All that is missing are some user-controlled web pages that generate data sets and graphs. Those will come in the next topic discussion and on a new page. As a recap: we started MySQL, created a new database with a table for holding my temperature data points, and modified a script to process data and read or write it into the database. All told, we spent less than eight hours pounding the keyboard and are now ready for the next step. Stay tuned for:

    User interface using Perl/MySQL

    See the temp table here.

    Kibler Electronics, PO Box 535, Lincoln, CA 95648-0535, USA.
    mail: bill@kiblerelectronics.com

    This is a "Java Free Site" and for the reasons why Read This Article.

    Copyright © 2016, Bill Kibler/Kibler Electronics.