What’s the worst that could happen?

Watch this. Regardless of how you feel about global climate change, this video could help you choose a path of action for the future. Watch it and decide for yourself how you should live.

Greg Craven’s website

For people citing the whole “Climategate” issue, in which scientists were accused of falsifying data: read the Wikipedia article on the subject. There is a ton of information there, including the fact that several organizations took a good look at the data and methods used by those scientists and found that there was no misconduct.

Crunchy Peanut Butter Cookies

I made these cookies as travel food for a long car trip we made recently.  They are high in protein, and relatively low in simple carbs, at least compared to many other cookies and car snacks.  They were tasty and provided great energy in the car without causing that dreaded sleepiness that comes from snacking on too many simple carbohydrates.

These cookies had a wonderful peanut butter flavor as well as a great crunch, and they were super easy to make! I didn’t use the optional nuts or chocolate chips; I plan to try using crunchy peanut butter at some point to add some crunch in a quick & easy way.

I found this Crunchy Peanut Butter Cookies Recipe at the Food & Wine Magazine site:

Crunchy Peanut Butter Cookies

TOTAL TIME: 25 MIN
SERVINGS: Makes 2 dozen cookies (or so…)

Ingredients

1 cup smooth peanut butter

1 cup sugar

1 teaspoon baking soda

1 extra-large egg, lightly beaten

2 tablespoons finely chopped peanuts (optional)

1/4 cup mini chocolate chips (optional)

Directions

  1. Preheat the oven to 350° and position 2 racks in the upper and lower thirds of the oven.
  2. In a medium bowl, mix the peanut butter with the sugar, baking soda and egg.
  3. (optional) Stir in the peanuts and/or chocolate chips.
  4. Roll tablespoons of the dough into 24 balls. Set the balls on 2 baking sheets, and using a fork, make a crosshatch pattern on each cookie.
  5. Bake for 15 minutes, shifting the baking sheets from front to back and bottom to top, until the cookies are lightly browned and set.
  6. Let cool on a wire rack.

Almond Biscotti

The original recipe is in normal text, with my adjustments and notes in {curly braces} after each line.

2 sticks butter, room temp
1 3/4 cup sugar
6 eggs
2 tsp. vanilla
1 tsp. almond flavoring
1 tsp. baking powder
1 cup sliced almonds
6 cups flour

  • Cream butter and gradually add sugar.  Mix well.
  • Add one egg at a time; cream well until fluffy.
  • Add vanilla and almond flavorings.
  • Add sliced almonds.  {I once forgot to add them at this point, and added them after the flour, just before kneading…it worked pretty well that way, too!}
  • Add baking powder and flour gradually and mix well.
  • Turn dough out onto floured surface.  Knead briefly {I found that I liked them better if I kneaded the dough a bit more than “briefly” – something like ten minutes.} like you would with bread dough, adding flour a little bit at a time until you can form loaves (use as little flour as you can).
  • Shape dough into two loaves (or four if you want smaller biscotti).
  • Put loaves on foil-lined cookie sheets.  Flatten loaves a little bit with your hands.  {Pat the loaves down wide and flat, as the dough will mostly retain the shape you give it here. I also don’t use foil, I just put them directly onto my stainless air-bake cookie sheets.}
  • {I also use a little milk or water (VERY LITTLE) to moisten the top of the loaves and sprinkle some sliced almonds on top for effect.  I like it, anyway.}
  • Bake at 350 degrees for 35 minutes or until golden brown. {You don’t want to overbake at this point, as you’ll toast them more below.}
  • Cut into slices. {Looks nicest if you cut on an angle.}
  • Lay slices flat on cookie sheets and put back in oven at 400 degrees to toast for 5 minutes on each side. {I found that they were more “biscotti-like” if you toasted them for longer, more like 12-15 minutes per side, especially since the loaves were sometimes still pretty moist in the center, but brown enough on the outside that you wouldn’t have wanted to bake them any longer as a whole loaf. Keep an eye on them if you extend the time – you definitely don’t want them to burn!}
  • Let cool, and enjoy!

QuickBooks Pro 2008 Unrecoverable Error upon start

I just rebuilt a computer at work due to a motherboard failure, and upon reinstallation, had a problem with QuickBooks Pro 2008.  When I tried to run the program, all I got was:
“QuickBooks unrecoverable error” as the title of a message box, with contents indicating that the program couldn’t start and asking if I would like to send an error report.

Other key info related to this error was the error code “00585 53668” at the bottom of the message box, and the error code “0xc0000005” in one of the error report’s XML files.  I searched the usual search engines with a variety of terms, and most of the solutions I came across related to uninstalling and reinstalling the .NET Framework (version 2.0).  Of course I did this, to no avail.  Other potential solutions involved reinstalling QuickBooks, which I also did, but, as expected, it resolved nothing, since I had just freshly installed the app in the first place.

Then I ran across one post on fixya.com, titled “Went to open Quickbooks pro,” where another user was also having problems opening QB due to a DLL error.  The recommended solution in that case was to run the “reboot.bat” file in the QB program directory.  I checked the contents of this batch file, and it appeared to re-register a bunch of DLLs.  I ran it, and voilà! Instant success.
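For the curious, re-registering a library by hand uses the same regsvr32 tool that the batch file calls in bulk. A minimal sketch of the idea, from a Command Prompt (the DLL name below is a made-up placeholder, not necessarily one the real reboot.bat touches):

cd "C:\Program Files\Intuit\QuickBooks 2008"
reboot.bat

rem or, for a single library, silently:
regsvr32 /s SomeQuickBooksLibrary.dll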

Relevant here, but unmentioned above: I had reinstalled WinXP Pro onto my user’s existing hard drive from the failed computer, so Windows was automatically reinstalled into “C:\WINDOWS.0” instead of the usual “C:\WINDOWS”. This mattered because, all throughout the reboot.bat file, the commands were registering DLLs in the new WINDOWS.0 folder…

So, if you have any QuickBooks Pro 2008 startup problems after reinstalling Windows and QuickBooks, especially if you are using a non-standard install directory for Windows, try running the reboot.bat file in the QuickBooks program directory (typically “C:\Program Files\Intuit\QuickBooks 2008”).

Good Luck!

(This post was created to grab search results for the QuickBooks Pro 2008 error code 00585 53668, related to an unrecoverable error when starting the program, especially directly after a reinstall onto a used hard drive where Windows is installed into a directory other than “C:\WINDOWS”.)

su: invalid script: /usr/libexec/auth/login_passwd (OpenBSD)

This week I had to replace a host-based firewall, based on OpenBSD, that had some failing hardware.  I managed to get the failing system up and running and made a full backup of it using tar over SSH to a remote computer.  (Thanks to Trinity Rescue Kit for the help! I ran TRK on a Windows laptop borrowed from a co-worker to provide a temporary SSH server.)  Upon restoration, I just had to tweak one file, /etc/fstab, to adjust for the different disk layout of the replacement system.

I ran into one issue after restoration, though: when I logged in as an unprivileged user, and attempted to su to another user, I got the error: su: invalid script: /usr/libexec/auth/login_passwd

Well, it turns out that I forgot the -p option to tar when I unpacked the tarball onto the new system…this option preserves uid, gid, and file mode, as well as the setuid and setgid bits when tar is run as the superuser/root.  /usr/bin/su is owned by root and needs the setuid bit set in order to work properly.  The proper fix would be to unpack the tarball again with the right options, but in a pinch, you could just reapply the setuid bit to /usr/bin/su.  (Please realize that, in that case, you may have missed other files where the setuid/setgid bits should be set, so this is not the best solution, but it can definitely get you going again.)
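For reference, the backup-and-restore round trip looks something like this (hostnames and paths are placeholders):

# On the failing system: stream a full backup to a remote host over SSH
tar czf - / | ssh user@rescuehost "cat > /backups/firewall-full.tar.gz"

# On the replacement system, as root: the p flag preserves modes,
# ownership, and the setuid/setgid bits
cd / && tar xzpf /backups/firewall-full.tar.gz

And the in-a-pinch fix is just:

chmod u+s /usr/bin/su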

I found this information via this post in the Kernel Trap archives.  Thanks to “Walt” for posting this info to be archived and indexed by the search engines!  It certainly helped me this week!

EDIT: Definitely go through and re-run tar with the appropriate options to preserve permissions and special bits on files!  If you don’t specify the “-p” option, all files will be owned by the user that unpacked the tar archive (root:wheel in my case), and permissions, ownership, and the setuid/setgid bits will not be preserved.

Major WordPress blog performance problem solved

In a previous post, WordPress woes, I went through the procedure I used to solve some performance problems related to how the IP address on this server is NATed.  Well, there was another problem somewhere that made the WordPress admin incredibly slow most of the time, kept Akismet from working at all, and caused a variety of other symptoms.

I went through a bit of diagnosis, and I came across this in my /var/log/php-errors.log (I enabled this logging in /etc/php.ini to track what was happening):

php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /var/www/sitename/scriptname.php on line 20
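(For reference, the two php.ini directives that turn on this logging are:

log_errors = On
error_log = /var/log/php-errors.log

followed by an Apache restart to pick up the change.)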

I checked my /etc/hosts, /etc/resolv.conf, and /etc/host.conf files, and everything was fine there.  In addition, DNS resolution worked fine from the command line via dig and ping.  I tried running my simple test script through the PHP command-line client, and it worked fine there as well.  I checked the script in a browser, and no luck: it hung until the DNS lookup in the script timed out, then finished loading.
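The test script was nothing fancy; something like this is enough to exercise a DNS lookup from within Apache (the hostname here is just an example):

<?php
// Minimal resolver test: blocks until DNS answers or the lookup times out.
var_dump(gethostbyname('www.example.com'));
?>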

So, I did that again, but this time, I ran

netstat -an

on the command line while the script was attempting its DNS lookup.  It turned out that Apache was querying an old DNS server that is not accessible from this network.  Oy!
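(Tip: the stale server stands out faster if you filter the output for DNS traffic, with something like netstat -an | grep ':53' while the lookup is hanging.)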

This makes a little bit of sense to me, as this server was originally set up in a convenient location on a different network and then physically moved to the colocation data center.

Well, I don’t know why Apache still thought it should look at the old DNS server (the one from the other network), but I made sure there was no reference to it in old /var/lib/dhclient.leases files and then ran

apachectl stop
apachectl start

And, voilà! The problem went away!  All the other times, I had just run

apachectl graceful

which apparently wasn’t clearing out its memory of the old DNS server, or was still looking at the (no longer relevant) dhclient lease file. That is silly, since the server is now set up on networks with dedicated IPs and I am not using DHCP on any interface.

Weird.  But, solved!

Crawl Rate Tracker plugin missing chart on CentOS 4.x

Well, as part of my WordPress install, I am using the Crawl Rate Tracker plugin. It shows the hits on your blog from various spiders and so on. In its dashboard (in your WordPress admin) there is a nifty chart that visualizes the info. Or, at least, there should be. In my case, I got a text link that just output the URL of the PHP script that should have generated the chart.

By looking into sbtracking-chart-data.php, I found that as it incremented a value to track the date, it hit the number 1225684800, which is apparently the maximum integer value in PHP 4.3.9 (the version included in CentOS 4.x, with security patches backported). This value corresponds to some point during November 2, 2008. Well, as you well know, we are past that point in history, which caused this PHP script to loop infinitely: incrementing the variable never raised an error, the value just stuck at 1225684800, and the script was using the date as its means of breaking out of the loop.

To resolve this, I upgraded to PHP 5.1.6, which also required a MySQL upgrade. (The way I did it, anyway, which was the fast & easy, CentOS “semi-supported” way.) I edited the file /etc/yum.repos.d/CentOS-Base.repo to make the centosplus repository available by changing enabled=0 to enabled=1 and then adding this line under the same repository:

includepkgs=php* mysql*

which restricts the upgrades and installs from this repository to the php and mysql packages.
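For context, the edited stanza ends up looking something like this; the shipped name/mirrorlist lines in your copy may differ slightly, so leave those as they are:

[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=1
includepkgs=php* mysql*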

I ran yum update and let it download and install all the necessary packages. This put me at PHP 5.1.6 with some extra security patches and MySQL 5.0.68 with the same.
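A quick way to confirm what you ended up with:

php -v
mysql --version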

Upon an Apache restart (apachectl graceful), I tested the Crawl Rate Tracker plugin, and AHA! The chart is there, and as nifty as ever.
Goodnight.

Poor WordPress admin performance

During the journey of installing WordPress, choosing a theme, setting up plugins, and so on, I ran into a problem that oh-so-many out there have hit: terrible performance in the WordPress admin (the downloadable version of WordPress, not wordpress.com).

Well, to make a long story short this time: do as everyone recommends, even if you believe the problem started before a certain point or isn’t plugin-related…just do it: disable all plugins, which should get your performance back to something more acceptable. Once you are at that point, enable your plugins one at a time until you hit your performance snag. I won’t list the problematic one for me, as yours will probably be different.

Just do it. You’ll be glad you did, even if only because you can then be certain that it isn’t your plugins.

Just do it.

WordPress woes

Well, for my first post, let me touch on some WordPress performance problems, as I just ran into said problems while getting this blog running.

So, I set up this virtual server a couple weeks ago.  On it, I installed a variety of things, including some other PHP-heavy sites.  All of them work great; faster than I expected, in fact.

Today, I installed WordPress in a subfolder on an Apache VirtualHost (specifically, david.pryke.us, which, since you are here and reading this, you already know).  I performed the basic setup of wp-config.php and then navigated to the site to complete the installation.  This took a while to run, but I figured that perhaps the MySQL instance on this server was taxed and it just took a while to set up the basic database.

Once I clicked the “Log In” button at the last screen, I realized that something was terribly wrong.  It took 40 seconds or so to load the basic login page for me to enter my username and password.  Once I entered them, it took another 40 seconds or so to reach the admin.  I tried just loading the basic blog, without logging in to the admin…40 seconds.  I started looking for WordPress performance problems on Google.  Many of the results suggested reverting to the default “Kubrick” theme and starting from there (no problem, it was a default install; I was already using that theme).  After that, they suggested disabling all the plugins and enabling them one at a time, taking note of when big slowdowns start to happen.

Well, since this was a fresh, default install, I figured that wasn’t the major problem.  While I am a SysAdmin, and have no problem going through things one at a time, I also hit on a post by Paul Spoerry about how to Diagnose slow WordPress performance using FireBug.  I found that to be a great idea, and one that I should have thought of by this point, as a lot of my coworkers use FireBug to look at problems during website development.

So, I installed FireBug and inspected my site with it.  Wow.  40.532 seconds for the basic index.php page, and then a few hundred milliseconds at worst for the rest of the JPEGs and such combined.  So, I started looking for WordPress performance diagnosis tips and came across a WordPress forum topic regarding a similar problem.  In there, I found a suggestion to insert code like:

<!-- <?php echo get_num_queries(); ?> queries. <?php timer_stop(1); ?> seconds. -->

which, I found out later, was already in the Kubrick theme.  One other thing I found and enabled in my footer.php was this:

You can see how long each query is taking with a few modifications.

In your wp-config.php file, add this line of code to the top:
define('SAVEQUERIES', true);

Then, in the theme’s footer.php file, you can do this to dump all the queries and how long they took:

if (SAVEQUERIES) {
    global $wpdb;
    // Dump each query and how long it took, inside an HTML comment
    echo "<!--\n";
    print_r($wpdb->queries);
    echo "\n-->";
}

Then looking at the source of the page will show all the queries and a number showing how long they took.

After you do this, turn the SAVEQUERIES back off by setting it to false in the wp-config.php file. You don’t want it to spit out the queries all the time.

The key there, which I knew, but some other readers on that forum topic didn’t, was to put “<?php” before the code block, and “?>” after the code block in footer.php.  I looked at the source of the page after I added those pieces of code, and it told me two things.  The first line output this:

<!-- 21 queries. 40.195 seconds. -->

This told me that WordPress believed the database queries were taking over 40 seconds to perform.  However, the second piece of output (from $wpdb->queries) told me something totally different.  This output lists the SQL for each query, as well as how long it took to run.  Each one was along the lines of 0.0001130104064941 or 0.00027084350585938 seconds, which, added together, still came to much less than one second.  Something isn’t “adding up” here…

After reading the rest of that forum topic, I saw that someone mentioned a problem which went away when he ran the internal “wp-cron.php” script by hand, but came back every time he created a new post.  There are two important pieces of information here.  One is that this internal cron script is scheduled to run again when certain actions are taken, such as creating a new post.  The second, and the important one in my case, is that it is run from within the web server itself…specifically, from within the PHP parser.

Now, a key piece of info for my problem is that this site is hosted on a virtual machine that lives on a non-routable, RFC 1918 IP: 10.53.22.13. This matters because the public IP of this site is 66.179.100.13 (at least as of this writing, on November 3rd, 2008).  The PHP parser tried to connect to david.pryke.us/blog/….. and could not get there, because the firewall/NAT machine “in front” of the server would not redirect traffic back down the same network link to the 10.53.22.13 address when one of the machines on that link asked for the public, routable IP of 66.179.100.13. (This is the classic NAT “hairpin,” or NAT loopback, problem.)

To resolve this, and make the long story short, I had to add a line in /etc/hosts that read:

10.53.22.13     david.pryke.us

This allowed the server to resolve the “correct” IP for that domain name (david.pryke.us), and voilà! The server loaded the first default post in less than a second!
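(A quick way to confirm the override is in effect, from the server itself:

getent hosts david.pryke.us

which should now print 10.53.22.13.)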

Problem solved.  (This should never have been a problem, as I usually set up the hosts file on these servers right away…but I forgot in this instance. Oops!)

Virtualization: What is it? or “Virtualization vs. Emulation”

Today I was inspired by an article in the February 27th issue of InfoWorld magazine written by Tom Yager, titled “What virtualization is — and what it isn’t,” regarding misuse of the term “virtualization.” He goes on to briefly identify a few ways that the term is used correctly, as well as incorrectly, in some modern software products, namely VMware software, Apple’s Rosetta binary translator, and Microsoft’s Virtual PC for Mac. I’m going to take a slightly different direction here, but I’ll touch on those products.

Definitions: Virtualization and Emulation

Virtualization is a broad term that refers to the abstraction of resources across many aspects of computing. The context I will be discussing today is local hardware virtualization. What this means is that multiple operating systems, or multiple copies of the same operating system (OS), can run simultaneously on a single set of hardware and retain access to all aspects of that hardware. When one of the installed operating systems requests access to a piece of hardware, the layer that performs the virtualizing intercepts that call, and if the hardware is currently being used by another instance of an installed OS, it schedules the hardware call to happen as soon as possible. If the hardware is available, or once it becomes available, the call is passed on to the hardware, and any responses from the hardware are directed right back to the calling OS. This is a very fast process, as there is minimal interaction here, and the installed operating systems run at near full speed. (See my previous post on Xen Virtualization for a brief look at one way this can work.)

Emulation is recreating an entire hardware architecture in software, then typically running an OS under that (though it could be used in a much “smaller” way, such as running a single program or even translating a single instruction). As you can probably imagine, a program that acts like an entire piece of hardware is hardly simple, and it is typically much slower than the real hardware it is emulating.

Which to use when, and why?

Emulation is handy when you want to run an operating system or program from a completely different system on your machine. Think, for example, of playing Super Nintendo games on your computer, or running a Commodore 64 program under Windows or Mac OS. It can also be used for things like developing software that will be encoded on a chip embedded in a consumer product, such as a calculator, a remote control, a television, or even a clothes washer! Emulation is likewise good when you are developing a new hardware product, such as a CPU, but want to get the design right before you manufacture ten, a thousand, or a million of them. You can create the entire hardware architecture in software and then work under that software as though you have the final device right in front of you. If you find bugs in the design, or something you could optimize to work better, you can change the emulation software. Once you have everything designed the way you want it, you can send the design out to be prototyped, tested, and manufactured.

Virtualization, on the other hand, is used when you want to run multiple OS’s (or multiple copies of an operating system) on a single machine. One reason you might do this is if you are designing a distributed system (such as a cluster of machines) and are trying to develop some software (such as communications protocols) that requires testing across many machines at once, but you only have one or two machines with which to test. (This example works best when you have multiprocessor/multicore machines to work with.) Another reason is if you want to run Windows, Linux, and FreeBSD simultaneously on one machine…without “dual-booting” or using emulation! (Examples chosen at random…many other combinations are possible, and I am not endorsing any particular product, nor trying to slight any product.)

A third, and particularly useful, application of this kind of virtualization is separating out individual parts of a complex system…such as a mail server solution. One way (of many) you could separate this example is to have the MTA (mail transfer agent, the program that actually receives the mail) running on one virtual machine, run an anti-virus/anti-malware scanner on a second virtual machine, and have a webmail interface running on a third virtual machine. (As you may guess, there are a number of other ways to set up this system, as I have glossed over a few parts of this complex system, such as the mail store, IMAP & POP servers, databases to store virtual addresses, and more.) This would allow you to use just one machine to accomplish this goal, while giving each conceptual part of the system its own dedicated resources…and no single part of the system could bring down another part. (Have any of you ever had an Apache server running a webmail client use all your available memory, causing failure or extreme delays in the MTA that is trying to receive e-mail? It can happen…)

Final thoughts for tonight…

Well, I hope I have brought a little insight to the question of what virtualization and emulation are in one context, and given enough examples to give you an idea of how each works, as well as some potential uses for each. It turns out that I didn’t mention the products from the first paragraph again, but there is a ton to talk about when it comes to virtualization, even when the term is used exactly as defined here, and there are plenty of other meanings of the term. So, don’t be surprised if I end up talking about these concepts often…it is a field that interests me and one I use every day in my work as a systems administrator.