
My Journey into Git

(Git logo)

My Journey


My journey to discovering git was not an easy one. Since git was not covered in my formal programming lessons, I had to learn "version control" the hard way - the very hard way.

When I started learning HTML back in 2001, I learned to save and modify local files only. In 2002, I learned PHP and MySQL by working on a team server, but version control never came up (git wouldn't be released until 2005, but existing options like CVS or SVK weren't taught either). My first local server came in 2004, when I learned to FTP files to a server instead of only testing local files.

I quickly moved up to SSH'ing into the server and editing the code directly, backing up files with a combination of file extensions (such as .bak), date stamps (like .20150515), and "Backup 20140523" folders.

...and I used this method up until about a year ago, when I took an intro git course online. Unfortunately, I didn't quite understand the purpose of git, because the demos kept talking about files that already existed or branches someone else had made - never really diving into the underlying purpose of git, which is version control.

I also couldn't get used to the idea of uploading my database login information (along with other private code) to the public site GitHub, which is great for open-source projects (see mine here), but not great for a company's version control backup. So I dabbled in git for a while, until it hit me about a month ago: I could use git on my own server(s) and not have to deal with private repositories on GitHub (or the prices).

So I started using git on my server for site-wide project backups. Then I wanted to work through errors somewhere other than production (I didn't want the world to see my error testing, because that would be unprofessional). After setting up local Apache and MySQL servers and installing PHP (on Windows), I thought I could use the GitHub software to create backups - but that only works with the GitHub site, and I needed something that worked with my own existing servers. So I installed Cygwin, which gave me the Linux-style commands, including git, to talk to my production server.

Overall, this is what I learned, and I hope it helps others:
The Process
(The "direct" process vs. the git process)

Usage


(If you don't already have git set up on your remote server, please install it first - I recommend sudo apt-get install -y git. Otherwise, this won't work, and it's just a bunch of lines of code.)
The Setup
Remote (separate from the production folder):
# create a bare repository to receive pushes
mkdir [dir].git && cd $_
git init --bare
# copy in the deployment hook (shown below)
cp ~/post-receive.sample hooks/post-receive

The hooks folder holds scripts that git runs automatically to catch incoming files and do something with them (or at least, that's what I've discovered). To make the hooks run properly, you need a post-receive file, and this is what it should look like:
post-receive.sample
#!/bin/sh
GIT_WORK_TREE=[absolute path to production] git checkout -f
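
One note from my own stumbling: git will only run the hook if the file is executable, so mark it as such (run from inside [dir].git):
chmod +x hooks/post-receive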


Once you have that set up, you can work on your local machine to create and modify files.
Local development machine
# If files do not exist
git clone [user]@[yourserver.com]:[dir].git
cd [dir]

# If files DO exist/update
cd [dir]
git pull

# Time for editing
vim [file]
[...]
[work on files, test on development machine]
[...]
# (Ready to upload to production server)
git add [file(s) - * works as well]
git commit -m "Relevant message to update"
git push


One important note: as I have learned, your git folder is not your production folder. I had my .git folder in a project's production folder, and that was fine for local editing, but not for remote pulling. If your repository folder (example: project.git) lives somewhere else, like your home folder, then you can use the post-receive hook to automatically check committed files out to the production area.
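
If you already have a local project and don't want to re-clone it, you should be able to point it at the server repository instead (the remote name "production" is just my label, and the bracketed parts are placeholders):
cd [existing project]
git init  # skip if it's already a repository
git add * && git commit -m "Initial commit"
git remote add production [user]@[yourserver.com]:[dir].git
git push production master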

Bonus: SSH Login w/o PW


Unless you want to type in your login information every time you git push, I recommend setting it up so your local machine/development server can automatically upload to the production server. If you are running a Linux system (and I recommend you do), then you can do the following:
SSH Password-less Login
On your local machine (hopefully a Cygwin or Linux/Mac terminal):
ssh-keygen -t rsa
ssh [user]@B 'mkdir -p .ssh'
cat ~/.ssh/id_rsa.pub | ssh [user]@B 'cat >> .ssh/authorized_keys'
B = Remote server
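
(If your system has ssh-copy-id, it wraps those last two steps into a single command:)
ssh-copy-id [user]@B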



Summary


There is still quite a lot I don't understand about git (like branches, merging, etc.), but I am getting better. I've been using this guide as a resource, since many other guides are very technical for a non-git person.

My takeaway: Better integration into team development instead of just "solo development"


Tags:#git #development #php #mysql #html #linux


Update Youtube Video Privacy with API and OAuth

(OAuth logo)

I posted this on my Facebook and Github, but I thought I'd post it here for a more public audience.

Recently, while working on Japanoblog, I realized that there was a problem: when we created videos, we would post them for our Patreon supporters 3 days early so they could get early access to them. To pull that off, I had a few options:
  1. I could post a separate video to Youtube/Vimeo/etc and share the link, but then wait 3 days, upload the real video, and take down the temp one (which would mean 2x the work, and misleading stats),
  2. I could make the video "Unlisted" and share the link with our Patreon supporters, then wait 3 days and change the video to "Public" for all, or
  3. Do the 2nd option, but have the server automatically change the status to "Public" when the video was ready to go public (as per the blog post publishing date).

Well, being the pragmatic programmer that I am, I figured out how to make the server do it for me. But it was not an easy task (apparently, nobody had done this before - at least, not anywhere publicly searchable). After scouring the internet to dissect the Youtube API and Google OAuth (along with token creation), researching and pulling parts from about 7 different public user projects, and some patient testing, I finally have it...I hope.

This script is based on Dom Sammut's code and the Youtube Sample Code (PHP #1).

(Don't want to copy/paste? Here's the Github repository)

So, without further ado, here is what I have come up with:

First: get your tokens


You need to generate your tokens to get the process started.
<?php
# Primary code from https://www.domsammut.com/code/php-server-side-youtube-v3-oauth-api-video-upload-guide/

# Create a Client ID and Client Secret by creating OAuth credentials
# at https://console.developers.google.com/apis/credentials
# MAKE SURE YOU UPDATE YOUR REDIRECT URL TO MATCH!
$CLIENT_ID = "XXXXXXXXXXXXXX.apps.googleusercontent.com";
$CLIENT_SECRET = "XXXXXXXXXXX";
$application_name = "APPLICATION_NAME";

// Call set_include_path() as needed to point to your client library.
#set_include_path($_SERVER['DOCUMENT_ROOT'] . '/directory/to/google/api/');
# Download the PHP Client Library from Google at https://developers.google.com/api-client-library/php/
# This has been installed using Composer - update if you download the files directly
set_include_path(get_include_path() . PATH_SEPARATOR . '/PATH/TO/vendor/google/apiclient/src/');
require_once 'Google/Client.php';
require_once 'Google/Service/YouTube.php';
session_start();

/*
 * You can acquire an OAuth 2.0 client ID and client secret from the
 * Google Cloud Console <https://cloud.google.com/console>
 * For more information about using OAuth 2.0 to access Google APIs, please see:
 * <https://developers.google.com/youtube/v3/guides/authentication>
 * Please ensure that you have enabled the YouTube Data API for your project.
 */
$OAUTH2_CLIENT_ID = $CLIENT_ID;
$OAUTH2_CLIENT_SECRET = $CLIENT_SECRET;
#$REDIRECT = 'http://localhost/oauth2callback.php';
$REDIRECT = 'http://YOUR_URL.com/oauth2callback.php';
$APPNAME = $application_name;

$client = new Google_Client();
$client->setClientId($OAUTH2_CLIENT_ID);
$client->setClientSecret($OAUTH2_CLIENT_SECRET);
$client->setScopes('https://www.googleapis.com/auth/youtube');
$client->setRedirectUri($REDIRECT);
$client->setApplicationName($APPNAME);
$client->setAccessType('offline'); # 'offline' is what gets you the refresh token

// Define an object that will be used to make all API requests.
$youtube = new Google_Service_YouTube($client);

if (isset($_GET['code'])) {
    if (strval($_SESSION['state']) !== strval($_GET['state'])) {
        die('The session state did not match.');
    }

    $client->authenticate($_GET['code']);
    $_SESSION['token'] = $client->getAccessToken();
}

if (isset($_SESSION['token'])) {
    $client->setAccessToken($_SESSION['token']);
    # This is the token you will save to "the_key.txt" for the next script
    echo '<code>' . $_SESSION['token'] . '</code>';
}

// Check to ensure that the access token was successfully acquired.
$htmlBody = '';
if ($client->getAccessToken()) {
    try {
        // Call the channels.list method to retrieve information about the
        // currently authenticated user's channel.
        $channelsResponse = $youtube->channels->listChannels('contentDetails', array(
            'mine' => 'true',
        ));

        foreach ($channelsResponse['items'] as $channel) {
            // Extract the unique playlist ID that identifies the list of videos
            // uploaded to the channel, and then call the playlistItems.list method
            // to retrieve that list.
            $uploadsListId = $channel['contentDetails']['relatedPlaylists']['uploads'];

            $playlistItemsResponse = $youtube->playlistItems->listPlaylistItems('snippet', array(
                'playlistId' => $uploadsListId,
                'maxResults' => 50
            ));

            $htmlBody .= "<h3>Videos in list $uploadsListId</h3><ul>";
            foreach ($playlistItemsResponse['items'] as $playlistItem) {
                $htmlBody .= sprintf('<li>%s (%s)</li>', $playlistItem['snippet']['title'],
                    $playlistItem['snippet']['resourceId']['videoId']);
            }
            $htmlBody .= '</ul>';
        }
    } catch (Google_ServiceException $e) {
        $htmlBody .= sprintf('<p>A service error occurred: <code>%s</code></p>',
            htmlspecialchars($e->getMessage()));
    } catch (Google_Exception $e) {
        $htmlBody .= sprintf('<p>A client error occurred: <code>%s</code></p>',
            htmlspecialchars($e->getMessage()));
    }

    $_SESSION['token'] = $client->getAccessToken();
} else {
    $state = mt_rand();
    $client->setState($state);
    $_SESSION['state'] = $state;

    $authUrl = $client->createAuthUrl();
    $htmlBody = <<<END
  <h3>Authorization Required</h3>
  <p>You need to <a href="$authUrl">authorize access</a> before proceeding.</p>
END;
}
?>

<!doctype html>
<html>
<head>
    <title>My Uploads</title>
</head>
<body>
<?php echo $htmlBody ?>
</body>
</html>
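
(To run that, load the page in a browser, click through the authorization, and the page echoes your token back. Saving it can be as simple as the following - the path is just my suggested placeholder:)
echo 'PASTE_THE_ECHOED_TOKEN_JSON_HERE' > /path/to/the_key.txt
chmod 600 /path/to/the_key.txt  # the token grants access to your channel, so keep it private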


Now that that's all set, save the response to a file (I recommend "the_key.txt"), and modify and run the following:
<?php
/**
 * This code is to be run automatically to update a Youtube video's privacy status
 *
 * First, generate your key using "get-token.php" - read the notes below for generation
 * Next, update this file with the appropriate information (path to key file, Client ID,
 *    Client Secret (OAuth required), Application Name, Database Login, Database Query, and
 *    location of the PHP Client Library - all download information is below)
 *
 * @author Kyle Perkins
 * @site https://github.com/kode29/google-youtube-api-privacystatus
 *
 * NOTICE: The rest of the copyright should stay intact for the other scripts (Dom Sammut (domsammut.com) and Ibrahim Ulukaya (Google))
 * Last Update: 20160108
 **/

# Primary code from https://www.domsammut.com/code/php-server-side-youtube-v3-oauth-api-video-upload-guide/
# Mixed with sample code from https://developers.google.com/youtube/v3/docs/videos/update (PHP #1)

# Generate "the_key" with get-token.php and store it in "the_key.txt" (or wherever you want) BEFORE running this script.
# Also, make sure "the_key" has a REFRESH TOKEN!
$key_file = "/path/to/the_key.txt";

# Create a Client ID and Client Secret by creating OAuth credentials
# at https://console.developers.google.com/apis/credentials
# MAKE SURE YOU UPDATE YOUR REDIRECT URL TO MATCH!
$CLIENT_ID = "XXXXXXXXXXXXXX.apps.googleusercontent.com";
$CLIENT_SECRET = "XXXXXXXXXXX";
$application_name = "APPLICATION-NAME";

# Check the DB for videos whose publish stamp is this minute
$video_list = array();
$dbh = new PDO('mysql:host=localhost;dbname=DATABASE_NAME', "DATABASE_USER", "DATABASE_PW");

$sql = "select `video` from `TABLE` where `stamp` like '" . date("Y-m-d H:i:") . "%'";
$query = $dbh->prepare($sql);
$query->execute();
# Loop over the result set so multiple videos in the same minute are all caught
while ($row = $query->fetch(PDO::FETCH_ASSOC)) {
    $video_list[] = $row['video'];
}

$key = file_get_contents($key_file);
#var_dump($key);

// Call set_include_path() as needed to point to your client library.
# Download the PHP Client Library from Google at https://developers.google.com/api-client-library/php/
# This has been installed using Composer - update if you download the files directly
set_include_path(get_include_path() . PATH_SEPARATOR . '/PATH/TO/vendor/google/apiclient/src/');

require_once 'Google/Client.php';
require_once 'Google/Service/YouTube.php';
session_start();

/*
 * You can acquire an OAuth 2.0 client ID and client secret from the
 * Google Developers Console <https://console.developers.google.com/>
 * For more information about using OAuth 2.0 to access Google APIs, please see:
 * <https://developers.google.com/youtube/v3/guides/authentication>
 * Please ensure that you have enabled the YouTube Data API for your project.
 */

#$redirect = filter_var('http://' . $_SERVER['HTTP_HOST'] . $_SERVER['PHP_SELF'], FILTER_SANITIZE_URL);
# If running via cron, HTTP_HOST may be blank
$redirect = filter_var('http://YOUR_URL/' . $_SERVER['PHP_SELF'], FILTER_SANITIZE_URL);

$scope = array("https://www.googleapis.com/auth/youtube", "https://www.googleapis.com/auth/youtubepartner", "https://www.googleapis.com/auth/youtube.forcessl");

$client = new Google_Client();
$client->setApplicationName($application_name);
$client->setClientId($CLIENT_ID);
$client->setClientSecret($CLIENT_SECRET);
$client->setAccessType('offline');
$client->setAccessToken($key);
$client->setScopes($scope);
$client->setRedirectUri($redirect);

$htmlBody = '';

// Check to ensure that the access token was successfully acquired.
if ($client->getAccessToken()) {
    /**
     * Check to see if our access token has expired. If so, get a new one and save it to file for future use.
     */
    if ($client->isAccessTokenExpired()) {
        $newToken = json_decode($client->getAccessToken());
        $client->refreshToken($newToken->refresh_token);
        # This is for debugging if your token is not regenerated
        #var_dump($client->getAccessToken());
        file_put_contents($key_file, $client->getAccessToken());
    }

    // Define an object that will be used to make all API requests.
    $youtube = new Google_Service_YouTube($client);

    /**
     * This sample adds new tags to a YouTube video by:
     *
     * 1. Retrieving the video resource by calling the "youtube.videos.list" method
     *    and setting the "id" parameter
     * 2. Appending new tags to the video resource's snippet.tags[] list
     * 3. Updating the video resource by calling the youtube.videos.update method.
     *
     * @author Ibrahim Ulukaya
     */
    foreach ($video_list as $VIDEO_ID) {
        # Strip the URL down to the bare video ID (a sample $VIDEO_ID would be "gYY3fVz6PjY")
        $VIDEO_ID = str_replace("https://youtube.com/watch?v=", "", $VIDEO_ID);
        $VIDEO_ID = str_replace("https://youtu.be/", "", $VIDEO_ID);

        try {
            // Call the API's videos.list method to retrieve the video resource.
            $listResponse = $youtube->videos->listVideos("status", array('id' => $VIDEO_ID));

            // If $listResponse is empty, the specified video was not found.
            if (empty($listResponse)) {
                $htmlBody .= sprintf("<h3>Can't find a video with video id: %s</h3>", $VIDEO_ID);
            } else {
                // Since the request specified a video ID, the response only
                // contains one video resource.
                $video = $listResponse[0];
                $videoStatus = $video['status'];
                $videoStatus->privacyStatus = 'public'; # privacyStatus options are public, private, and unlisted
                $video->setStatus($videoStatus);
                $updateResponse = $youtube->videos->update('status', $video);

                $htmlBody = "We're Good!"; # Just a debug phrase to show the script completed successfully. Not required output
            }
        } catch (Google_Service_Exception $e) {
            $htmlBody .= sprintf('<p>A service error occurred: <code>%s</code></p>',
                htmlspecialchars($e->getMessage()));
        } catch (Google_Exception $e) {
            $htmlBody .= sprintf('<p>A client error occurred: <code>%s</code></p>',
                htmlspecialchars($e->getMessage()));
        }
    }

    $_SESSION['token'] = $client->getAccessToken();
} else {
    // If the user hasn't authorized the app, initiate the OAuth flow
    $state = mt_rand();
    $client->setState($state);
    $_SESSION['state'] = $state;

    $authUrl = $client->createAuthUrl();
    $htmlBody = <<<END
  <h3>Authorization Required</h3>
  <p>You need to <a href="$authUrl">authorize access</a> before proceeding.</p>
END;
}
#echo "<body>$htmlBody</body>";
?>
 

Again, here's the Github repository.
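
Since the script only matches videos stamped for the current minute, it has to run every minute. Here's the sort of crontab entry I'd use - the script name and PHP binary path are placeholders:
* * * * * /usr/bin/php /path/to/update-privacy.php >/dev/null 2>&1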


Tags:#php #mysql #japanoblog #video #youtube #api #oauth


Backup and Restore All MySQL Databases

I routinely make mass backups of all of my MySQL databases, but I sometimes forget the syntax. Instead of creating a script to do it (which I will do in the future), I have to Google the syntax to find out what it is. Most of the time, I can only find one command or the other, so I thought I'd gather the two of them here and hopefully provide a better reference for anyone else searching.

Making a Full Backup (All Databases)


Using a Linux Terminal:
$ mysqldump -u [username] -p[password] --all-databases > [filename].sql
(Notice that there is no space between -p and [password].)

Restoring a Full Backup (All Databases)

Using a Linux Terminal:
$ mysql -u [username] -p[password] < [filename].sql
(Notice that there is no space between -p and [password].)

Hope this helps others that scour the internet looking for these two common commands.
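
And since I mentioned scripting this, here's a minimal sketch of what that script might look like (the username, password, and paths are placeholders):
#!/bin/sh
# Dump every database into a date-stamped file
MYSQL_USER=[username]
MYSQL_PASS=[password]
BACKUP_DIR=/path/to/backups

mysqldump -u "$MYSQL_USER" -p"$MYSQL_PASS" --all-databases > "$BACKUP_DIR/all-databases.$(date +%Y%m%d).sql"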


Tags:#mysql #linux #backup #restore


I had a case of the Tuesdays

I know it's Wednesday, but this first portion deals with YESTERDAY. Deal with it. After I posted yesterday's entry, Keat and I went home. Nothing super special, but it started raining. Hard. We went to Ingles to pick up some dinner, then got back out to the car, and the stupid car wouldn't start! Since the RKE doesn't respond anymore, we have to open the car with the key, which causes a mini-alarm to go off for a bit. After about 10-15 seconds, the full-on "HEY! I'M BEING STOLEN!" alarm goes off. Which it did. For 10 minutes. The stupid key wouldn't turn in the ignition. So here we are, holding groceries, in the hard, wet rain, trying to either disconnect the car battery and/or start the car. The car finally started and we were off. Soaked, but wet.

ANYWAY, there were a few comics in the past day that I liked, so I'll be posting them throughout the week. Here's the first one.
The System

Haven't posted a System comic in a while, but what the hey - I thought I'd do it.

On to more technical stuff(s): I had an idea for a developer-friendly MySQL error notifier, since mysql_error() only shows up on the front end. The proof-of-concept I developed yesterday really worked! And I'm so happy! Here it is for anybody to use.
#notify developer(s) of MySQL Errors
#use: $result=mysql_query($sql) or die("Oops!".mysql_dev_error($sql));
#(c) 2011 - Shadow Development [http://shadowdev.com]
if (!function_exists('mysql_dev_error')){
function mysql_dev_error($sql){
        #get the php-generated mysql error
        $error=mysql_error();

        #get the database name
        $db_q=mysql_query("SELECT DATABASE()");
        list($db) = mysql_fetch_array($db_q);

        #get the top-level domain along with the page the
        ## query is being executed on
        $page=$_SERVER['SERVER_NAME'].$_SERVER['PHP_SELF'];

        #build HTML email headers
        $headers  = 'MIME-Version: 1.0' . "\r\n";
        $headers .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";

        #generate message (line breaks are HTML tags since the email is text/html)
        $message  = "SQL Query:<br>$sql<br><br>";
        $message .= "Database:<br>$db<br><br>";
        $message .= "Error:<br>$error<br><br>";
        $message .= "Page: $page";

        #send off
        mail("DEVELOPER_EMAIL", "MySQL Error for ".$_SERVER['SERVER_NAME'], $message, $headers);
}}

BTW: Creative Commons License
MySQL Developer Error by Kyle "KP" Perkins - Shadow Development is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

I'm really surprised how well it works for just a proof-of-concept. I told Keat last night that I must be getting really good at this "programming" thing. I used to think of an idea, write it down, write the code in 15 minutes, then spend 2 weeks debugging it. Now, my 15-minute code sessions work like a charm from the get-go. I must be doing something right....

Here's the overview of what I got done today:
  • I got some minor edits done to the Shadow Dev Beta blog (added a side navigation like the one on this blog; had to make my own URL shortener for some URL-based comments).

  • Made some additions and edits to the Receipt Rescue.

  • Finally sent out May's newsletter and followups (I'm only 3 days late!).

  • Contacted a potential client to find out that my idea of a "potential client" turned into "We need an editor" - no fun.

  • Made a minor adjustment to my Cron job from last week (decided to have the IP addresses added to a DB log and see which ones were repeats - so far, it's blocked 29 addresses in just over a week; that's about an hour of free time I got back!).

  • Finished Rocko's Modern Life on Netflix (first 3 seasons were good, the last season got a little weak, like most final seasons do).

  • Added a Microtimer to the Main and Beta Shadow Dev sites to see the page generation time.

  • Found out about EchoSign.com - an electronic PDF signer for clients. Not a bad idea, and I may try it in the future.

...and that's about it for now. Till tomorrow!


Tags:#thesystem #php #creativecommons #mysql #car #alarm


Snow Day? Snow Way!

xkcd
(Snow comic for the Snow day)


Sorry for the bad pun, but that's the way it seemed today. We saw that it was snowing last night, so Keat got up a little early and found out that classes were on a 2-hour delayed schedule. So what did we do: we slept for 2 more hours. We finally got up at around 9:30 and got our things together to leave. I didn't want to get out of the comfy, thick, warm covers. I thought "the world can end before I'll get out," but Keat had to get to class.

There was snow on the ground, and more falling every minute. My hands were about to freeze off. After working up the courage to brave the cold, we went to my car only to find out that my doors were frozen shut. What luck. I went inside and got a pitcher of water to pour on the ice. I went back outside, and Keat had the passenger doors open. I guess that side was facing the sun. I focused back on the pitcher, and poured it on the driver-side windows and doors. After some wiggling, the doors finally opened and I started up the car. It was a little squeaky due to the cold, but we made it to HCC.

I dropped Keat off at class and I went to the Mill Pond to take some pictures of a snow-covered mill house on a pristine lake. What a picturesque moment! Too bad my phone doesn't take very good long-distance photos. I was driving to the office when Keat called me saying her class was over. Apparently, the teacher was just collecting papers. I went back to her and dropped some more stuff off. Then, I went to the office again.

I caught up on my email, RSS feeder, and other related interests. A few main things I wanted to take care of were the Uptime status report on the ShadowDev.com site, "Latest Comments" on the blog, and a few other items.

According to Pingdom, the new server has surprisingly more downtime than the old server. The report said that the old server had a 99% uptime rating; so far, the new server has 74% uptime. So much for the guarantee...

I found out that there were a few things that didn't transfer from the old server to the new server, including some of the recent changes I made to the blog. I played "code catchup" for a few things (including re-referencing the format_link algorithm from the old server on the new one; apparently, the new server couldn't find the right file and couldn't parse the functions, so the Twitterfeed fetcher and RSS maker wouldn't work), then worked on the "Latest Comments" section, which took the most time. I put in the following code to retrieve the title of the entry a comment was left on:
$sql="select `title` from `journal` where id=$id";
 
list($title)=mysql_query($sql) or die("Error 30: ".mysql_error());
 
echo $title;

and ran the code. It didn't turn up what I expected. As a matter of fact, it didn't turn up anything! The source code, the output buffer, the error reporting...all were blank. I spent the next hour trying different things to figure out the issue. I finally gave up and referenced the PHP online documentation. What I found made me feel stupid. I had to add one line to the above code for it to work properly:
$sql="select `title` from `journal` where id=$id";
 
$result=mysql_query($sql) or die("Error 30: ".mysql_error());
 
list($title) = mysql_fetch_array($result);
 
echo $title;

Wow, did I feel stupid. I put that in, and it worked like a charm. Only then did I find out that there were 4 spam-based comments that I had to moderate.

Keat got out of class, we had lunch, then back to the office. We also found out that the next time our neighbors' dog starts being mistreated, we can call the Asheville Police and file a complaint, which allows the landlords to file a violation as well. This is great because now we have a plan of action instead of wildly guessing our next step.

Anyway, while she worked and took a nap, I watched Danny Phantom, Season 3 (which I had not seen before) and worked on the Expo report (which is due tonight) and the VIM coloration issue. On the old server, the VIM editor would automatically add colors to the proper code segments while editing. The new server didn't support this, and it was getting very confusing when I would edit files. After searching Google for about 10 minutes, nothing turned up. Apparently, I was calling it by the wrong name and should have been searching for "Syntax Highlighting." Long story short: the VIM version I had (7.0) was compiled with the TINY option, which is basically a minimalist installation. I tried to re-compile it, but the configure file was missing. So I tried to update it via yum, but yum said it was up-to-date. The latest version was 7.3, so I knew something was off. I downloaded VIM 7.3 and compiled it with everything under the sun, so I was guaranteed to get the syntax highlighting I so wanted. After some initial testing, I also found out that the command vi defaulted to the minimalistic editor (7.0), while vim is the full-fledged editor (7.3) with syntax highlighting included. So I fixed a few bugs and now have the most up-to-date version for editing, along with a backup for emergency fixing. No harm done.
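
For anyone hitting the same wall, the quickest check I know of is vim's own version banner, which names the build size and lists compiled-in features as +/- flags:
vim --version | head -2              # the build line says Tiny/Small/Normal/Big/Huge
vim --version | grep -o '[+-]syntax' # a leading + means syntax highlighting was compiled in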

I then decided to check Facebook (for the heck of it), and while trying to get used to their new profile layout, saw that a friend of ours was having some trouble on the snowy roads in Asheville. I asked her to keep us updated, because we still had to come home. We had a small discussion and she posted photos of the snowy roads. After I saw the photos, all I thought was "Aw crap." Based on her reports, the roads were snowy and icy, and people were sliding all over the place. And me without my current car insurance card (it's in the mail and should be here within the next few days). Keat and I decided to try to find her teacher so she could possibly miss class if need be. We frantically searched the traffic reports and saw that I-40W was slow and I-40E was clear. However, that didn't say how the road conditions were. I thought back to last year, when Keat and I had to walk to the nearest working grocery store, and how I didn't want to drive through that again.

We drove to her school and tried to find her teacher. The roads weren't that bad (although that was a 5 minute drive). After searching and waiting for about 45 minutes, the teacher finally arrived and wasn't too sure about class. We then decided to gather our courage and drive home. Keat didn't want to stay the night at the office.

We got on the highway and I followed a truck for about 10 miles. Thinking the roads got worse the closer we got to the city, I was mentally prepared to face the ice. We finally passed the truck I was following since the roads weren't that bad (just a little snow dusting), and kicked it up the rest of the way home. Turns out the roads in West Asheville weren't bad at all, just a little dusting. The roads in South Asheville got the brunt of the storm and were icy. Luckily, we weren't in that area.

We got home, took care of the kitties, and Keat worked on a presentation and paper she had due. My laptop was used to catch up on our shows, and I played PS2 while I waited.

Chi's still in heat and it's driving us nuts! We are going to call the spaying service and get an appointment asap!

The temperature is in the mid-teens (with a wind chill of 3!) and there is a light flurry outside. This isn't right for NC! I just wonder how things will be different tomorrow with the class schedule. I still have to get the Expo report done and send it in before the SMDC meeting on the 8th. Wish me luck!


Tags:#xkcd #snow #class #pingdom #downtime #mysql #php #yum #vim


Bad day? That's okay! I have GENIES!!!

More Genies? Why not?
(edited from the original for audiences - view original)

Today seemed to start off good, but then it went down the hole.....fast. And it seemed like I had to build my own ladder to get out of the hole.

Anyway, onto the details: I woke up at about 10:30ish, because Keat had class at 11:30. After writing last night's entry, I wanted to sleep for as long as I could. However, I fell asleep to Futurama and heard the "Anthology of Interest II" episode (the one where Leela finds her "true home" in a certain film containing lollipop children, a brain-less scarecrow, a squeaky 1930's android, an ironic lion, and an omnipotent wizard), and I realized: "If the Wicked Witch of the West melted with just a small amount of liquid, then does that mean that she never drank anything or ever took a shower? I mean, I can understand the 'no shower' thing because she's a wicked witch and that would make her ugly, but never showering? That's just downright repulsive!"

Anyway, Keat was dropped off at school, and I went to the office. I knew that the server transfer was taking place, and I really want to thank the support guy I'm working with at my hosting company. He's helping me out through the transition from RHEL 3AS (cPanel) to CentOS 5 (Plesk). I thought that it would be a lot of work to transfer the files over, but I didn't realize it would take this much work (I'll get into one bugging detail at the end).

As I drove to the office, I received a call from Allan, the Executive Director of the building. I couldn't get to the phone before it went to voicemail, but he said that his computer wasn't connecting to the internet and his computer was "fading out." Wasn't quite sure what that meant, but I was only 5 minutes from the office.

I arrived and jumped straight into work. I opened the office door, placed my laptop bag on my chair, and went right into the server room to check the primary connection. It was working fine. So....what's up with Allan's computer?

I went back to my office and set up my laptop. After going through my daily emails and my DDN (or RSS as most know it, but I call it my "Daily Digital Newspaper"), I jumped back into the server transition. It seemed kind of slow since only 1 domain was being transferred at a time. Apparently, the MySQL databases weren't being moved until I said so. 1 task down.

I had a to-do list from a client from yesterday, so I took care of that while I had the chance. 2 tasks down.

After about 2 hours, I had an issue with the MySQL database transfer and credentials. I asked the support guy, and he fixed most of the issue. However, I found out that they want each site to have its own unique database login instead of 1 generic login. I went with 1 generic login for the longest time because it was quick. Security-wise, that is a large risk. Now I know that Q&D (quick and dirty) isn't the best way to go.

After a while, I realized: "Hey, when they are transferring the shadowdev.com domain, my email system will be down!" So I sent an informative email to the support guy with my backup email address.

After a LONGER while, I was wondering what was taking so long. I hadn't heard anything from my support guy since 1pm (EST) and was wondering how things were going. I went to go pick up Keat at school, got some quick lunch (needed to get back in case there were more server issues), and got back to the office. Guess what: the internet goes out. I started to get furious! THIS IS JUST WHAT I NEEDED! Here I am trying to oversee a sensitive server transition, and the freakin' internet just went out. What a day.

I went to the main server room thinking the problem was local. Nope, the main server wouldn't connect either. Therefore, the problem was on our ISP's end. I tried to load the community college's website on my phone for a contact number (they're our ISP), only to find out that their main site is down. Great. So if their site is down, then the whole county is down. I called someone I knew over there only to get voicemail. I thought "If their phones are on a VoIP system, and the network is down, then their phones are out." Just great. I called again after 10 minutes just because I could, and got the person I was looking for. She let me know that the main internet supplier in the whole region had a fiber link cut and they didn't know how long it would take to fix. That was 3pm. I'm online now, so I hope they fixed it if the office and home use the same artery for the connection.

Keat asked me a few marketing budget questions while we were waiting, and then (after leaving an informational letter) we left for Keat's oil appointment. We pulled into the station to have her car serviced and went for a stroll downtown. Stopped by the library, then had a chocolate malted and a Cheerwine at the Woolworth's Sandwich bar in downtown. Add a Turkey sandwich, and that's good livin' right 'der.

We came home and I took a little nap. Got up, responded to some emails and checked my DDN, then did some laundry. However (here's the detail), I found out that the main reason the server transition is taking so long is that the server support guy is going through each of my database configuration files and changing the information to the site-specific login. All 11GB of files.

Honestly, I would have been happy if the files, databases, and domains were transferred as they are, and I would take care of the relative and absolute file location updates and database privileges myself. That's how I expected to spend Thanksgiving: updating server files.

However, the server support guy said that since so many sites depended on the main shadowdev.com domain, he was going to wait to transfer that until Friday or Saturday. One problem: most of the sites that have a news feature use a centralized function for parsing content links and link-like information - and that's in the main shadowdev.com files. Without the shadowdev.com files, the sites with news features can't parse the link-related content properly. It's going to be a long break.

But I found two things that cheered me up today. 1: the comic above. I can't believe nobody thought of (and published) it before! It all makes sense! And 2: Keat and I were finishing up our chocolate malt, and she stood up to go to the restroom. She put her phone on the table and said "You hold on to it if the car guys call." The moment she placed it on the table: "RING RING RING". That was something you couldn't time if you had to do it again. She answered, her car was ready, and she ran to the restroom. She and I had a fun hypothetical conversation after that. Her: "HELLO! YOUHAVETHEWORSTTIMINGINTHEWORLD!" / Car guy: "Your car is ready." / Her: "THANK YOU!" That made me laugh.


Edit: 2010-11-23 22:14:32 Forgot one thing. This photo made me smile as well. Thought I'd share it around.


Enough for today. Time for a whatever-we-can-cook-up dinner. Then, off to Thanksgiving....yay.


Tags:#thanksgiving #cyanideandhappiness #genie #car #server #parse #oz #futurama #centos #internet #mysql


Wednesday is here, now it's gone...

I thought I'd go ahead and get this post out of the way before I forgot tonight (knowing how busy I'll be with cleaning the apartment for "Inspection" on the 30th). You may have also noticed that I'm including comics in the posts. I'm doing this not because I can draw (really, I can't), but these are some of my favorite comics from various sources. They may have something to do with the content of the post, they may not. It depends on what I find that day.
Pearls Before Swine - May 23, 2010

After I posted this morning's post, things got crazy. First, the web server suffered a MySQL hiccup. I couldn't get anything to load or edit, which got really annoying, especially since a lot of the sites I create run from our main MySQL database server. After multiple attempts to get it restarted (and stay ON), I sent an email off to my hosting provider, and they said "We restarted it, try it now." I did, and same thing. Then it hit me: I'm doing a massive download of the main server to a backup server for the hosting server changeover. Could the FTP requests be hogging the sockets and denying MySQL the sockets it needs to access the page information? Maybe. So I slowed my FTP client down from 10 files at a time to 3, and limited the download speed. After about 15 minutes, no more MySQL errors. Crisis averted.....for now.

For the majority of the day, I worked on the Blog design and features. The main things I changed were the background (like it?), moved the Social Network features to the top, automatic syntax highlighting for source code, and the toggling (togglation?) of the archive listings (that took me all day). jQuery is certainly being challenging, but I think I'm getting the hang of the basics.

I also learned that the loan we applied for (won't say who with, though) was turned down. This isn't the first time, but I'm a grown person, and instead of whining to some random Internet reader or forum list (or to the person's/committee's face), I'm going to say "Ok, thanks for the opportunity. What can I do to improve the business so I can reapply for the loan?" We'll see where it goes from there.

I'm still working on the "Projects" tab, and that should be up by next week (hopefully). I also worked on the F&I site, unifying the Ticket section and writing a news post, which should help in the PR/SM department.

This is actually the first day in a month that I've worked without a TV show in the background. Exactly 1 month from yesterday, I started watching Heroes on Netflix and finished 16 days later. The next day, I started Eureka and finished yesterday. Today, I just listened to music on Pandora and worked. It's amazing what a non-distraction workplace can do for the attention span.

I was also asked why I am doing this blog. The purpose is actually 3-fold: #1) to provide family members with updates to what I am doing and the progress I have for our advisors/project owners, which mirrors #2) update our advisors and project owners on the status and progression of their projects, and #3) give myself a personal log to track what I've done over time and where I am going (just in case I forget - say, over a long vacation). I'm also using the Blog as a personal sandbox where I can test and refine new techniques and functions without having to mess up other sites. This way, I can show off what I know and nobody's site will go down because of it........I hope.

I had some criticism about the content of the blog, saying that it was not relevant to some of the viewers, but I want to assure you (generically) that this is more for the advisors and myself. If I include too much jargon in a post, please either let me know or Google it. It would be really stressful to maintain 3 separate blogs (if that many) to update the individual audiences, so I'm going to try to create a function which can extract the appropriate information per audience depending on what is searched. Maybe that's the wrong direction? I won't know until I get responses.

Since the Blog design is basically finished, I'm going to go back to working on the Accelerator. I have a new business idea in mind that I'm really excited about, but I'm not going to say anything until the idea is ready to go public (news-wise, not IPO).


Tags:#heroes #eureka #jquery #netflix #pandora #mysql #ftp #pearlsbeforeswine