Welcome to my website. I am always posting links to photo albums, art, technology, and other creations. Everything that you will see on my numerous personal sites is powered by the formVista™ Website Management Engine.


  • Restarting XServer in Fedora 20
    03/24/2014 9:49AM

    I was having problems logging in to my laptop this morning.  When I entered my password, it just hung.  I pressed Ctrl-Alt-F2 to switch to an alternate tty, logged in as root, and checked for errors in /var/log/messages.

    Not seeing anything, I figured I'd try restarting the X server.  Still not being completely familiar with the systemd paradigm, it wasn't obvious how to restart it.

    So, as root, I simply switched the runlevel to 3 and then back to 5 via the following commands, which restarted the X server, and I was then able to log in.

    # telinit 3

    .... wait for a bit ....

    # telinit 5

    Log in, get to work.
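Fedora 20 uses systemd, so the old runlevel numbers above are really just aliases for target units. As far as I can tell, the native way to do the same dance is the following (run as root; this kills the graphical session just like telinit 3 does):

```shell
# multi-user.target is roughly runlevel 3, graphical.target roughly runlevel 5
systemctl isolate multi-user.target

# .... wait for a bit ....

systemctl isolate graphical.target
```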

  • s3cmd 'ERROR: Test failed: 403 (AccessDenied): Access Denied' and 'ERROR: Config: verbosity level '20' is not valid' [SOLVED]
    03/19/2014 10:17AM

    I'm working on a project that includes sending data via Amazon Simple Storage Service (S3) and was having some problems configuring and using the s3cmd client.

    The first thing I discovered about s3cmd is not to trust what it tells you when invoking s3cmd --configure to get things set up to use the bucket.

    $ s3cmd -v --configure s3://some-bucket/some-prefix/

    Enter new values or accept defaults in brackets with Enter.
    Refer to user manual for detailed description of all options.

    Access key and Secret key are your identifiers for Amazon S3
    Access Key: thisisanaccesskey
    Secret Key: thisisasecretkey

    Encryption password is used to protect your files from reading
    by unauthorized persons while in transfer to S3
    Encryption password:
    Path to GPG program:

    When using secure HTTPS protocol all communication with Amazon S3
    servers is protected from 3rd party eavesdropping. This method is
    slower than plain HTTP and can't be used if you're behind a proxy
    Use HTTPS protocol [No]: Y

    New settings:
      Access Key:
      Secret Key:
      Encryption password:
      Path to GPG program: None
      Use HTTPS protocol: True
      HTTP Proxy server name:
      HTTP Proxy server port: 0

    Test access with supplied credentials? [Y/n] y
    Please wait, attempting to list bucket:
    ERROR: Test failed: 403 (AccessDenied): Access Denied

    Retry configuration? [Y/n] n

    Save settings? [y/N] y
    Configuration saved to '/home/rchapin/.s3cfg'

    As you can see, when I ran configure and opted to test the configs, I got a 403 error.  At that point, I assumed that I didn't have access to the bucket and went back to the client to try to figure out whether I had the right key, whether they had set up the bucket with the right permissions, blah, blah, blah.

    It turns out that s3cmd simply gave me incorrect information: either the command it was using for the test wasn't valid, or it was trying to do something with the bucket that I didn't have permission to do.

    After running the config above, I tried:

    $ s3cmd put test.txt s3://some-bucket/some-prefix/
    ERROR: Config: verbosity level '20' is not valid
    test.txt -> s3://some-bucket/some-prefix/test.txt  [1 of 1]
     15 of 15   100% in    0s    67.58 B/s  done

    Turns out that I have access after all.

    $ s3cmd ls s3://some-bucket/some-prefix/
    ERROR: Config: verbosity level '20' is not valid
    2014-03-01 00:25         0   s3://some-bucket/some-prefix/
    2014-03-19 14:06        15   s3://some-bucket/some-prefix/test.txt

    It also turns out that appending the '-v' arg when configuring s3cmd causes it to throw the 'ERROR: Config: verbosity level '20' is not valid' error.

    If you delete the .s3cfg file in your home dir and re-run s3cmd --configure without the -v flag, it should work as expected.

    Just don't trust the s3cmd --configure test . . . test it yourself and you might find that you have access already.
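In short, the fix boils down to the following sketch (assuming ~/.s3cfg, the default config location, and the same hypothetical bucket/prefix as above):

```shell
# Throw away the config that was written with -v and re-run the wizard without it
rm ~/.s3cfg
s3cmd --configure

# Then test access yourself instead of trusting the wizard's test
echo "hello" > test.txt
s3cmd put test.txt s3://some-bucket/some-prefix/
s3cmd ls s3://some-bucket/some-prefix/
```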

  • Clone and Backup a Bootable USB Drive
    03/09/2014 10:19AM

    We recently got a new ASUS laptop for the boys to use (I'll use it too, it's pretty sweet) which came with Windows 8.

    It did not come with the install CD or license key, but included a recovery partition and the key in the BIOS.  Now that we've had it for a few weeks and verified that all of the hardware works, we are going to put Ubuntu on it, but I wanted to make sure that I would still be able to use the Windows 8 license on it if I wanted.

    So, using the Win8 recovery program, I created a bootable recovery disk on a USB stick, and I wanted to back it up as well as be able to make a clone of it if need be.

    Following are the dd commands to make that happen:

    First, do a tail of /var/log/messages before you plug in the USB drive.  You should see it recognized by the machine as sd[something].  Or, you can run fdisk -l and you should see the USB stick (as well as the other drives on your machine).

    Be warned: make sure that you have the devices correct before you run these commands, or you may destroy data on your machine.

    Assuming that the USB stick is sdg, clone the disk to a file on another computer:

    dd if=/dev/sdg of=./windows_8_rcvry_usb_asus.dd conv=notrunc

    Copy the file to another USB stick (again assuming that /dev/sdg is the USB drive; all data on /dev/sdg will be destroyed during this operation):

    dd if=./windows_8_rcvry_usb_asus.dd of=/dev/sdg conv=notrunc

    Just make sure that the usb drive to which you are copying is the same size or larger than the original one that you copied from.
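One extra step worth doing is verifying the copy by comparing checksums. This is a sketch of my own (verify_clone is a hypothetical helper, not part of dd); comparing the whole source device against the image file works right after the first dd above, since the image is a byte-for-byte read of the device:

```shell
# Compare SHA-256 checksums of two files or block devices.
# Returns success (0) when they are byte-for-byte identical.
verify_clone() {
  local src_sum dst_sum
  src_sum=$(sha256sum "$1" | awk '{print $1}')
  dst_sum=$(sha256sum "$2" | awk '{print $1}')
  [ "$src_sum" = "$dst_sum" ]
}

# e.g., as root, right after creating the image:
# verify_clone /dev/sdg ./windows_8_rcvry_usb_asus.dd && echo "clone OK"
```

Note that comparing the image against a *larger* destination stick will not match, since the device has extra bytes past the end of the image.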

  • Creating a Beep from a Command Line or Shell Script
    02/03/2014 5:51PM

    If you have a long-running command or shell script that you want to generate a beep upon completion on your PC running Linux, do the following:

    First, make sure that the pcspkr module is loaded:

    # modprobe pcspkr

    Then create a wrapper shell script that looks something like this:


    #!/bin/bash

    # Some long running command here . . .

    # Writing to /dev/console typically requires root
    echo -e '\a' > /dev/console
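If you do this a lot, the wrapper can be generalized. This is a sketch of my own; BEEP_TARGET is a variable I made up so the bell destination can be overridden (it defaults to /dev/console, which normally requires root; /dev/tty works from an interactive shell):

```shell
# Run any command, then emit the ASCII bell character when it finishes.
beep_when_done() {
  "$@"
  printf '\a' > "${BEEP_TARGET:-/dev/console}"
}

# Usage:
# beep_when_done make -j4
# BEEP_TARGET=/dev/tty beep_when_done ./long_job.sh
```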

  • Eclipse Crashing with SIGSEGV, Problematic Frame libgdk and/or libsoup Problem Solved
    01/31/2014 9:49AM

    I'm setting up a new workstation under Fedora Core 20 and getting my dev environment set up.

    I had copied over my /opt dir from my old machine which included an older version of Eclipse (3.8.2) that I had been using.  That version wasn't behaving very well and I decided to go with the latest and greatest stable version (Kepler, 4.3.1).

    Unfortunately, Kepler was dumping core with the following error:

     A fatal error has been detected by the Java Runtime Environment:

      SIGSEGV (0xb) at pc=0x00000030f703d09a, pid=2450, tid=139984564643584

     JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 1.7.0_51-b13)
     Java VM: Java HotSpot(TM) 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 compressed oops)
     Problematic frame:
     C  [libgdk-x11-2.0.so.0+0x3d09a]  g_param_spec_object+0x3d09a

     Core dump written. Default location: /home/rchapin/core or core.2450

    I realized that I had installed Acrobat Reader, and since I'm on a 64-bit architecture, that included all of the i686 rpms and compatibility libs.  I thought that for some reason there might be some confusion about which version of libgdk was being used.  That wasn't it.  I tried a different JDK (Oracle vs. OpenJDK); nope, that wasn't it either.

    Eventually, I tried deleting (actually moving aside) the .eclipse/ dir in my home dir and deleting all of the .classpath, .settings, and .project files and dirs in my workspace and then re-installing my Eclipse plugins for Kepler.

    Worked like a charm.

    What I think was happening was that some of the plugins for different versions of Eclipse were being pulled in at runtime and causing the Kepler binary to crash.
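For reference, the cleanup steps described above amount to something like the following sketch (clean_eclipse_metadata is a helper name of my own, and the workspace path is an assumption; point it at your real workspace):

```shell
# Move the per-user Eclipse state aside and delete per-project metadata
# (.classpath and .project files, .settings dirs) so plugins regenerate it.
clean_eclipse_metadata() {
  local workspace=$1
  if [ -d "$HOME/.eclipse" ]; then
    mv "$HOME/.eclipse" "$HOME/.eclipse.bak"
  fi
  find "$workspace" -name .classpath -type f -delete
  find "$workspace" -name .project -type f -delete
  find "$workspace" -name .settings -type d -prune -exec rm -rf {} +
}

# e.g.: clean_eclipse_metadata ~/workspace
```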
