Migrating WordPress site from a legacy hosting provider to AWS

Up until now my blog (lucaslouca.com) was hosted on a traditional, old-fashioned hosting provider. Their service provided a fixed 10 GB of storage together with FTP access and cPanel.

So here are the reasons why I switched over to AWS:
– Less expensive
– More flexible: you can launch virtual servers and database instances, configure firewalls, load balancers, etc.
– More secure: my database is no longer publicly accessible and sits inside a VPC, and user management is better
– A huge range of services, from CloudWatch to Lambda

Overall using AWS you have more control over your web applications.

So let’s get started with the migration process!

We are going to do the following:
• Set up an EC2 instance using a Basic 64-bit Amazon Linux AMI
• Set up an S3 bucket
• Set up a MySQL RDS instance
• Migrate the old MariaDB database to the new AWS DB instance
• Install Apache Web Server, MySQL, PHP, Git and Python on the EC2 instance
• Install WordPress on the EC2 instance
• Issue a Let’s Encrypt certificate
• Configure the Apache Web Server on EC2 for HTTPS

Create an S3 bucket
We will first create an S3 bucket that we can use to store any database dumps and files so we can access them through our new EC2 instance.

To create a new bucket, access the AWS Management Console and click the S3 tab. Create a new bucket and give it a name. Mine is named lucaslouca.com-wordpress.

Create an EC2 instance
To create a new instance, access the AWS Management Console and click the EC2 tab:

  • Choose an AMI in the classic instance wizard: I chose the Basic 64-bit Amazon Linux AMI.
  • Instance details: Select the Instance Type you want to use. I chose t2.micro.
  • Create a new key pair. Enter a name for your key pair (e.g. lucasloucacom) and download your key pair (e.g. lucasloucacom.pem).
  • Make sure you create a new security group, give it a name (e.g. lucaslouca.com-security-group) and add inbound rules for SSH, HTTP and HTTPS that allow traffic from all sources.
  • Launch your instance.

Note: For security purposes you can edit the inbound rule for SSH to allow traffic only from your own IP address.

Map IP Address and Domain Name
Your EC2 instance has an IP address as well as a DNS name. However, the default IP address is assigned dynamically and might change: you keep it while the instance is running and across reboots, but if you are ever forced to stop and start the instance you will lose it. If you have a domain name pointing to your instance, that is a bad thing. That’s why we need to associate a static Elastic IP address with our instance and then map the domain name to that IP address.

To associate an IP address to your instance:
In the AWS Management Console, click Elastic IPs (left navigation bar). Click Allocate New Address, and confirm by clicking the Yes, Allocate button.
Select the newly allocated IP address and select Actions -> Associate address in the popup menu. Select the EC2 instance and click Yes, Associate.

Note down the new Public DNS (e.g. ec2-35-158-16-195.eu-central-1.compute.amazonaws.com) of our EC2 instance. We will need it later.

Then, go to Route 53 -> Hosted zones -> Create Hosted Zone. Amazon will list four name servers (NS records).

Note them down and log in to your domain registrar. Under the DNS Manager delete the NS entries and any A records pointing to your old web server. At my domain registrar I deleted the NS and A entries and edited the name servers for my domain lucaslouca.com to point to the Route 53-provided name servers instead of my hosting provider’s defaults.

Back in the AWS console go to Route 53 -> Hosted zones and select your newly created zone. Click Create Record Set to set up some A records. Leave Alias set to No and paste the Elastic IP address of your EC2 instance into the Value field. This creates a new A record pointing the domain name lucaslouca.com to the IP 35.158.16.195. You can repeat this for any subdomains. For example, click Create Record Set again, enter www under Name, set Alias to Yes, and for the Alias Target select the previously created record (e.g. lucaslouca.com). That way www.lucaslouca.com will point to the same IP as lucaslouca.com.

Install Updates, Apache Web Server, MySQL, PHP, etc
Once the instance is up and running go ahead and ssh to your EC2 instance:
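
# use the key pair and the public DNS noted earlier (adjust to your own values)
ssh -i lucasloucacom.pem ec2-user@ec2-35-158-16-195.eu-central-1.compute.amazonaws.com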

To install the required updates and tools, run something like the following (package names are for the classic Amazon Linux AMI and may differ):
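
sudo yum update -y
# Apache 2.4, PHP 7.0 with the MySQL driver, the MySQL client, and mod_ssl for HTTPS later
sudo yum install -y httpd24 php70 php70-mysqlnd mysql mod24_ssl
sudo service httpd start
sudo chkconfig httpd on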

Create a DB instance
Next, we are going to need a MySQL database for our blog.

To create a new instance, access the AWS Management Console and click the Database -> RDS tab. Then click Launch a DB Instance and select MySQL.

Then specify a username, password and database name and make sure Publicly Accessible is set to NO. Make sure you also create a new security group.

Once the instance is created, select it and under Configuration Details click on the security group (e.g. rds-launch-wizard-3). Edit the inbound rules for that group and set the Source for the MYSQL/Aurora type to the security group you created earlier for your EC2 instance (e.g. lucaslouca.com-security-group). This will allow inbound traffic only from the security group lucaslouca.com-security-group. In other words, only traffic that comes from our EC2 web server is allowed in.

Once created note down the endpoint address (e.g. lucaslouca-com-wordpress-db.shsfahjkiahd.eu-central-1.rds.amazonaws.com:3306). You are going to need it later.

Install WordPress
Back in your EC2 shell, download and install WordPress; something like:
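
cd /var/www/html
sudo curl -O https://wordpress.org/latest.tar.gz
sudo tar -xzf latest.tar.gz
sudo mv wordpress blog    # the blog will live under /blog, matching the rewrite rules later
sudo cp blog/wp-config-sample.php blog/wp-config.php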

Finally, adjust wp-config.php with the database settings (username, password, database name, host) you created earlier. The relevant lines look something like this (the values are examples):
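
define('DB_NAME', 'wordpress');       // example: the database name you chose
define('DB_USER', 'admin');           // example: your RDS master username
define('DB_PASSWORD', 'your-password');
define('DB_HOST', 'lucaslouca-com-wordpress-db.shsfahjkiahd.eu-central-1.rds.amazonaws.com');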

By now you should be able to access your newly installed WordPress blog via http://ec2-35-158-16-195.eu-central-1.compute.amazonaws.com.

Export database from old blog
If your hosting provider is as crappy as mine and your only option is phpMyAdmin, then go ahead and log in to your phpMyAdmin site. Then, select your blog database from the left sidebar and go to Export. Select the Custom - display all possible options option and make sure you check the Add DROP TABLE / VIEW / PROCEDURE / FUNCTION setting. This will drop (delete) each table if it exists and recreate it in the database you are importing into. For Format select SQL and click Go.

Once you have your OLD_DB.sql go ahead and upload it to your S3 Bucket.

Backup wp-content from old blog
You also want to migrate any existing themes, plugins, etc. to your new AWS-hosted blog. For that you need to migrate the contents of wp-content from your old blog into your newly installed WordPress blog. So connect to your old hosting provider via FTP, navigate to your blog directory and download the entire wp-content directory. Archive it using:
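
tar -cvf wp-content.tar wp-content/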

Then go ahead and upload wp-content.tar to your S3 Bucket.

Create a new IAM access role for your EC2 instance
We need to be able to access our S3 bucket through our EC2 instance, so we can download our backed-up database and wp-content files. For that you need to login into your AWS console and navigate to your EC2 instance.

Select your EC2 instance and under Actions select Instance Settings -> Attach/Replace IAM role. From there you can select an existing role or create a new one by clicking Create new IAM role. Alternatively, you can create a new role from Services -> Security, Identity & Compliance -> IAM -> Roles -> Create new role.

So go ahead and create a new role with the AmazonS3FullAccess policy and attach the new role to your EC2 instance.

Access S3 from EC2
OK, once you have attached an IAM role with the AmazonS3FullAccess policy to your EC2 instance, go ahead and ssh to your EC2 instance as before.

Once you are logged in create a directory to sync our S3 bucket into:
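
mkdir lucaslouca.com-wordpress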

Then try and sync the S3 bucket using:
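
aws s3 sync s3://lucaslouca.com-wordpress lucaslouca.com-wordpress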

If the AWS CLI complains about a missing region, you first need to find the correct one; one way is the instance metadata service:
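
# the region is listed in the instance identity document
curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep region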

Then configure your AWS client to use it as the default region:
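
aws configure set region eu-central-1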

Verify your configuration using:
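
aws configure list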

And then go ahead and try and sync your S3 bucket again:
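
aws s3 sync s3://lucaslouca.com-wordpress lucaslouca.com-wordpress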

Your files are now accessible under lucaslouca.com-wordpress/

Import old database into AWS hosted database
Go ahead and ssh to your EC2 instance as before and import your old database into your AWS RDS database; something like:
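
# user and database are the values you chose when creating the RDS instance
mysql -h lucaslouca-com-wordpress-db.shsfahjkiahd.eu-central-1.rds.amazonaws.com -u admin -p wordpress < lucaslouca.com-wordpress/OLD_DB.sql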

Linking to new URL and defining new domain
Next you need to update any existing old URLs in your database to your new AWS URL. For that I temporarily made my DB instance Publicly Accessible and edited the security group’s inbound rules to allow any source (i.e. 0.0.0.0/0). That way I could access my database through a nice GUI SQL client (I used Sequel Pro) and run SQL along the following lines (assuming the default wp_ table prefix):
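
-- point home/siteurl and all content URLs at the new EC2 hostname
UPDATE wp_options SET option_value = REPLACE(option_value, 'http://lucaslouca.com', 'http://ec2-35-158-16-195.eu-central-1.compute.amazonaws.com') WHERE option_name IN ('home', 'siteurl');
UPDATE wp_posts SET guid = REPLACE(guid, 'http://lucaslouca.com', 'http://ec2-35-158-16-195.eu-central-1.compute.amazonaws.com');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'http://lucaslouca.com', 'http://ec2-35-158-16-195.eu-central-1.compute.amazonaws.com');
UPDATE wp_postmeta SET meta_value = REPLACE(meta_value, 'http://lucaslouca.com', 'http://ec2-35-158-16-195.eu-central-1.compute.amazonaws.com');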

Of course you could again ssh into your EC2 and use the mysql command.

Note: Once you are done do not forget to disable Publicly Accessible and remove the added inbound rules from the security group.

WordPress pretty permalinks on Amazon EC2 Linux instance
You will need to edit /etc/httpd/conf/httpd.conf and change AllowOverride None to AllowOverride All, so that it looks like this:
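
<Directory "/var/www/html">
    ...
    AllowOverride All
    ...
</Directory>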

Note: AllowOverride None appears two times in /etc/httpd/conf/httpd.conf and you need to change it in all cases.

Then, navigate to /var/www/html/blog and create an .htaccess file that looks like so (the standard WordPress rewrite block, adjusted for the /blog subdirectory):
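
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>
# END WordPress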

Also create an .htaccess file under /var/www/html/ to route lucaslouca.com/ to lucaslouca.com/blog; something along these lines:
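
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/blog/
RewriteRule ^(.*)$ /blog/$1 [L]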

This will transparently redirect all requests to /blog/{requested_resource}. If you have other subfolders that need to be excluded from this redirect you can just add an .htaccess file in those directories saying:
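
RewriteEngine Off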

For example, I wanted to leave https://lucaslouca.com/bargain and its content completely alone, so I just added such an .htaccess file under /var/www/html/bargain.

Finally, restart Apache:
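
sudo service httpd restart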

You should be able to view your old posts etc under the new host http://ec2-35-158-16-195.eu-central-1.compute.amazonaws.com.

Update WordPress site URLs
In the previous section we updated our database entries to point to our ec2-35-158-16-195.eu-central-1.compute.amazonaws.com domain. That way we could continue working until the NS and A records were updated. We can now run the SQL script again to point to our domain name (http://lucaslouca.com; note: http and not https).

You may need to temporarily make your DB instance Publicly Accessible and edit the security group’s inbound rules to allow any source (i.e. 0.0.0.0/0). That way you can access your database through a nice GUI SQL client (I used Sequel Pro) and run the same script as before, this time swapping in the new URL; for example:
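
-- same pattern as before; repeat the REPLACE for wp_posts.guid,
-- wp_posts.post_content and wp_postmeta.meta_value
UPDATE wp_options SET option_value = REPLACE(option_value, 'http://ec2-35-158-16-195.eu-central-1.compute.amazonaws.com', 'http://lucaslouca.com') WHERE option_name IN ('home', 'siteurl');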

Enable HTTPS
We already allowed HTTPS traffic in our lucaslouca.com-security-group. We now need to configure our Apache server to also listen on port 443 so that we can access our blog via HTTPS. For that we first need to obtain a CA-signed certificate. We will use Let’s Encrypt as our CA.

Add the following to your /etc/httpd/conf/httpd.conf; a minimal sketch of a port-80 virtual host that declares the server name (adjust to your setup):
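
<VirtualHost *:80>
    ServerName lucaslouca.com
    ServerAlias www.lucaslouca.com
    DocumentRoot /var/www/html
</VirtualHost>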

Install Python and Git on your EC2 instance (package names are for Amazon Linux):
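
sudo yum install -y python27 git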

Get the letsencrypt client:
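
cd /opt
sudo git clone https://github.com/letsencrypt/letsencrypt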

Create a config file that will be used for new certificates and renewals. It contains the private key size and your email address; for example:
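
# /etc/letsencrypt/cli.ini (path and email address are examples)
rsa-key-size = 4096
email = lucas@example.com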

Important: Before you run letsencrypt, temporarily disable /var/www/html/.htaccess, since its rewrite rules would get in the way of the challenge requests:
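
sudo mv /var/www/html/.htaccess /var/www/html/.htaccess.disabled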

Run letsencrypt; something like:
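
cd /opt/letsencrypt
# --debug is needed on Amazon Linux, where support was still experimental
sudo ./letsencrypt-auto certonly --debug --webroot -w /var/www/html -d lucaslouca.com -d www.lucaslouca.com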

Note: If you are trying to letsencrypt your AWS domain (e.g. ec2-35-158-16-195.eu-central-1.compute.amazonaws.com) and you are getting an error, it’s because amazonaws.com happens to be on Let’s Encrypt’s blacklist.

Enable /var/www/html/.htaccess and clean up:
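
sudo mv /var/www/html/.htaccess.disabled /var/www/html/.htaccess
sudo rm -rf /var/www/html/.well-known    # optionally remove the leftover challenge files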

The certificates are located at /etc/letsencrypt/live/ and the last thing to do is update your web server’s configuration. So edit your /etc/httpd/conf.d/ssl.conf file to point at the new certificate files:
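
SSLCertificateFile /etc/letsencrypt/live/lucaslouca.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/lucaslouca.com/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/lucaslouca.com/chain.pem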

Restart Apache:
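
sudo service httpd restart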

Try and access https://lucaslouca.com. It should work!

Finally, add the renew command to a crontab; the command that reloads your web server should go there as well. For example:
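
# attempt a renewal every Monday at 03:00, then reload Apache (example schedule)
0 3 * * 1 /opt/letsencrypt/letsencrypt-auto renew >> /var/log/letsencrypt-renew.log 2>&1
5 3 * * 1 /sbin/service httpd graceful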

Update WordPress site URLs
In the previous section we updated our database entries to point to our http://lucaslouca.com domain. Now that we have HTTPS up and running we can update our WordPress site URLs to point to HTTPS.

You may need to temporarily make your DB instance Publicly Accessible and edit the security group’s inbound rules to allow any source (i.e. 0.0.0.0/0). That way you can access your database through a nice GUI SQL client and run SQL along these lines:
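
-- again, repeat the REPLACE for wp_posts.guid, wp_posts.post_content and wp_postmeta.meta_value
UPDATE wp_options SET option_value = REPLACE(option_value, 'http://lucaslouca.com', 'https://lucaslouca.com') WHERE option_name IN ('home', 'siteurl');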

Note: Once you are done do not forget to disable Publicly Accessible for your db instance and remove the added inbound rules from the security group.

Give Apache access to the folders
An issue still exists when you try to update or install plugins: Apache does not have write access to the WordPress folders, because in the AMI the default ownership belongs to ec2-user.

Run something like this in your EC2 shell and you should be good to go:
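
sudo chown -R apache:apache /var/www/html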

That’s it!

 

How to set up a Let’s Encrypt (SSL) Certificate on OpenShift

In previous posts I have described how to deploy a Node.js application to OpenShift. Now it’s time to add a custom alias to our Node.js application so that it is accessible through a custom domain, like test.testnode.com. Currently it is accessible only through testnode-lukesnode.rhcloud.com. Of course we also want valid SSL certificates for our custom domain testnode.com. For that we need to get a certificate (a type of file) from a Certificate Authority (CA). Let’s Encrypt is a free, automated, and open certificate authority brought to you by the non-profit Internet Security Research Group (ISRG). So obviously we will use that.

Create alias via OpenShift web console
From the Applications section choose your application (e.g. testnode) and then click on change alias. For Domain Name enter your custom domain. Mine is test.testnode.com. Leave the rest of the fields blank and click Save.

To successfully use this alias, you must have an active CNAME record with your DNS provider. The alias is test.testnode.com and the destination app is testnode-lukesnode.rhcloud.com.

My provider is united-domains.de. So I went ahead, logged in and under Subdomains -> New Sub Domain I created a new subdomain test.testnode.com. Then under DNS Configuration for test.testnode.com, I was able to set the CNAME record to testnode-lukesnode.rhcloud.com for *.test.testnode.com (test.testnode.com included).

And that’s it!

Create certificates
We will need a valid certificate and its corresponding private key to upload to OpenShift for the new domain test.testnode.com. Under Mac OS X I have used certbot. So go ahead and install certbot:
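
# assuming Homebrew is installed
brew install certbot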

Once installed, run:
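
# request a certificate for the custom domain using the manual plugin
sudo certbot certonly --manual -d test.testnode.com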

OK, that didn’t work. Obviously Let’s Encrypt wants us to prove that we are the rightful owners of test.testnode.com. The way it verifies ownership is by trying to load the above URL (http://test.testnode.com/.well-known/acme-challenge/p1zEUvrrpAuTgj-b1bBk0zt9ypOn-BeLJWmxDi2xWXQ) and comparing the received result with the expected one.

We need to modify our Node.js application to return the hash Let’s Encrypt requires when the above URL is requested via GET. In my router.js I added a snippet along these lines (a sketch, assuming an Express-style router):
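
// First attempt: echo back only the token from the requested URL
app.get('/.well-known/acme-challenge/:token', function (req, res) {
    res.send(req.params.token);
});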

The above code reads the hash from the requested URL and returns it. OK, let’s try it one more time.

Hmmm, that didn’t work either. At this point I should probably read the manual. Apparently, when the URL http://test.testnode.com/.well-known/acme-challenge/xxxxxxxxxxx is requested, Let’s Encrypt expects xxxxxxxxxxx.yyyyyyyyyyy as the result. So I went and modified my router.js again; roughly:
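
// certbot's manual plugin prints the full expected response: <token>.<key-thumbprint>.
// THUMBPRINT is a hypothetical name for the part after the dot.
var THUMBPRINT = 'yyyyyyyyyyy';
app.get('/.well-known/acme-challenge/:token', function (req, res) {
    res.send(req.params.token + '.' + THUMBPRINT);
});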

Giving it a try again, I finally got my certificates.

The generated certificate and private key are located under /etc/letsencrypt/archive/test.testnode.com/cert1.pem and /etc/letsencrypt/archive/test.testnode.com/privkey1.pem respectively.

Upload certificates to OpenShift
For this we will use the OpenShift client tools; something like:
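
rhc alias update-cert testnode test.testnode.com --certificate /etc/letsencrypt/archive/test.testnode.com/cert1.pem --private-key /etc/letsencrypt/archive/test.testnode.com/privkey1.pem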

That’s it! Our application is now accessible through https://test.testnode.com.

 

Adding version number in Node.js app using Jenkins/OpenShift deploy

In a previous post I have illustrated how to deploy a Node.js app to OpenShift from a private GitHub repository using Jenkins.

It is often the case that you want to display the revision of the current code deployed in your test environment so you can quickly see if the running version of your app uses the latest code base. In my opinion this is a task for your build tool (such as Ant, Maven, Gradle, etc) or your automation server such as Jenkins.

I want to keep this revision information in a file called version.txt and serve it when a user tries to GET it. Since I am using Express, all I have to do in order to serve static files is the following:
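
// serve everything in the public directory as static files
app.use(express.static('public'));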

Now, all I have to do is tell Jenkins to create the file version.txt, fill it with the necessary information and save it under public in my app’s deployment directory on the OpenShift server. You can find out the OpenShift deployment directory using the predefined environment variable $OPENSHIFT_REPO_DIR:
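
echo $OPENSHIFT_REPO_DIR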

As we saw in OpenShift’s Jenkins configuration, a shell command is executed that deploys our Node.js application. So go ahead and navigate to YOUR_PROJECT_NAME -> Configuration. Scroll down to where it says Execute Shell. This field already contains a bunch of shell commands. Append something along these lines:
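
# write build metadata to public/version.txt in the deployment directory
# (BUILD_NUMBER and GIT_COMMIT are standard Jenkins environment variables)
echo "Build: ${BUILD_NUMBER}, Revision: ${GIT_COMMIT}, Built: $(date)" > ${OPENSHIFT_REPO_DIR}public/version.txt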

And that’s it! The next time Jenkins builds your project, it will execute the above shell command, which will in turn create a version.txt file and place it under public in your Node.js app. You can then access it via https://testnode-lukesnode.rhcloud.com/version.txt.

 

Deploy a Node.js app to OpenShift from a private GitHub repository

Assuming you have an OpenShift account go ahead and login. Otherwise create an account first.

Next, login to OpenShift’s web console, create a new Node.js application by clicking on Add Application… and then selecting Node.js 0.10.

Fill out the fields such as Public URL (e.g. testnode) and leave the rest with their default value and click on Create Application.

Once your application is created you will receive a repository URL to a newly created Git repository hosted on OpenShift similar to:
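
ssh://<app-id>@testnode-lukesnode.rhcloud.com/~/git/testnode.git/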

Install the OpenShift client tools:
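
sudo gem install rhc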

Once installed, run the rhc setup command to configure the client tools. The setup wizard generates a new pair of SSH keys in the default .ssh folder of your home directory. The setup requires your OpenShift username and password.

Next clone your GitHub repository:
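
# <your-user> and <your-repo> are placeholders
git clone https://github.com/<your-user>/<your-repo>.git
cd <your-repo>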

Add a new remote called openshift and merge your GitHub repository into your OpenShift repository:
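
git remote add openshift -f ssh://<app-id>@testnode-lukesnode.rhcloud.com/~/git/testnode.git/
git merge openshift/master -s recursive -X ours
git push openshift HEAD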

The git push openshift HEAD command starts a new build and deploys your Node.js application. Once it is done, you can access your Node.js application at https://testnode-lukesnode.rhcloud.com.

You can also go ahead and clone your OpenShift repository if you want to:
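
git clone ssh://<app-id>@testnode-lukesnode.rhcloud.com/~/git/testnode.git/ testnode-openshift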

This includes an .openshift directory which contains various OpenShift metadata and configuration files that are important for deployment.

(Optional) Setting up environment variables
It is often the case that your web application requires some configuration for certain functionality, such as an email contact form. This configuration may include parameters such as the username and password for the mail server, and it usually lives in a sensitive.config.js file which you include in your Node.js application using:
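
var config = require('./sensitive.config.js');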

Since you want to avoid hardcoding usernames and passwords in source code files, you can pass these to your application through environment variables. So at the end of the day your sensitive.config.js configuration file may look as follows:
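
// sensitive.config.js: reads the credentials from the environment (a sketch)
module.exports = {
    mail: {
        username: process.env.MAIL_USERNAME,
        password: process.env.MAIL_PASSWORD
    }
};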

and your repository stays free of sensitive information. The process.env property returns an object containing the user environment; process.env is followed by the name of the variable you wish to access, such as MAIL_USERNAME and MAIL_PASSWORD in our case. See more under the process.env documentation.

Now, we need to set up these environment variables in OpenShift. So go ahead, fire up your Terminal and enter:
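
# the values are examples
rhc env set MAIL_USERNAME=lucas MAIL_PASSWORD=secret -a testnode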

What is left to do is restart your Node.js application through the OpenShift web console.

(Optional) Skip OpenShift Git repository
You can also configure OpenShift to use only your GitHub repository for deployment by enabling Jenkins. In your web console select your Node.js application and click Enable Jenkins. This will provide you with a Jenkins URL (e.g. https://jenkins-lukesnode.rhcloud.com/) along with a username and password. Open up your Jenkins URL and log in using the provided username and password.

Taking a look at the build project’s configuration, you can see that OpenShift has already filled in the shell command to be executed on build.

If your repository is a private repository OpenShift will not be able to download the source code as it will not have the credentials to authenticate itself against the git repository.

To configure Jenkins to use GitHub we need to install some plugins. First, download the following Jenkins plugins:
github-api
plain-credentials
token-macro
credentials
structs
workflow-step-api
workflow-scm-step
scm-api
git-client
git
github

In Jenkins, go to Manage Jenkins then Manage Plugins and then Advanced. Then under Upload Plugin click Choose File and upload the .hpi file to install the plugin. Do this for all the plugins in the given order:
1. github-api
2. plain-credentials
3. token-macro
4. structs
5. workflow-step-api
6. workflow-scm-step
7. scm-api
8. credentials
9. git-client
10. github
11. git

Next, go to Jenkins -> Credentials -> System -> Global credentials and click Add Credentials. Choose Kind: Username with password and Scope: Global. Enter your GitHub credentials and click OK to save.

Next, go to Jenkins -> Credentials and click on Add Domain. Enter api.github.com and click OK to save.

Next, go to Jenkins -> Credentials -> System -> api.github.com and click Add Credentials. Then open GitHub in a new tab and generate a new Personal Access Token. Select the scopes repo and admin:repo_hook and click Generate token.

Back in Jenkins, select Kind: Secret text and paste your access token in the Secret field. Give your credentials a Description (e.g. GitHub access token) and hit OK to save.

Then go to Manage Jenkins -> Configure System, and in the GitHub section add a new GitHub server. Make sure the API URL is set to https://api.github.com. Select GitHub access token in the Credentials dropdown and click Test connection. Everything should work. Click Save.

We want GitHub to notify your Jenkins instance whenever you push commits to the repo. We’ll use Webhooks for this. Go to your GitHub repository settings (e.g. https://github.com/lucaslouca/foobubble/settings) and click on Webhooks. Then click Add webhook. Under Payload URL enter https://jenkins-lukesnode.rhcloud.com/github-webhook/ and choose the Content type application/x-www-form-urlencoded. Finally, click Add webhook.

Back in Jenkins, navigate to your Jenkins build project’s configuration page. Find the checkbox GitHub project and check it. For Project url enter your GitHub repository URL (e.g. https://github.com/lucaslouca/foobubble).

In the Source Code Management section choose Git and again enter your GitHub repository URL (e.g. https://github.com/lucaslouca/foobubble). For Branches to build set it to */master. For Credentials choose the username/password credentials you created earlier from the dropdown.

Scroll down to the Build Triggers section and check Build when a change is pushed to GitHub and click Save to save the changes.

Now when you push to your GitHub repository Jenkins will be notified via the Webhook and will start a new build. It will use the GitHub repository URL you specified under Source Code Management.

That’s it! Happy continuous delivery!

The Node.js application used in this tutorial is live at foobubble.com.

 

Keep gh-pages in sync with master

GitHub Pages is designed to host your personal, organization, or project pages directly from a GitHub repository.

Add changes to your master branch
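
git checkout master
# make your changes, then:
git add .
git commit -m "Your commit message"
git push origin master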

Add changes to gh-pages
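
# merge master into gh-pages and publish (assuming no conflicts)
git checkout gh-pages
git merge master
git push origin gh-pages
git checkout master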

Your page will be available at https://your-github-username.github.io/repository/.

 

Install Jupyter Notebook – Mac OS X

Upgrade to Python 3.x

Download and install Python 3.x. For this tutorial I have used 3.5.

Once you have downloaded and run the installer, Python 3 will be installed under:

/Library/Frameworks/Python.framework/Versions/3.5/bin/python3

The installer also adds the path for the above to your default path in .bash_profile so that when you type:

python3

on the command line, the system can find it. You'll know you've been successful if you see the Python interpreter launch.

Install pip

Fire up your Terminal and type:

sudo easy_install pip

Install PySpark on Mac

  1. Go to the Spark downloads page and choose a Spark release. For this tutorial I chose spark-2.0.1-bin-hadoop2.7.
  2. Choose a package type. For this tutorial I have chosen Pre-built for Hadoop 2.7 and later.
  3. Choose a download type: (Direct Download)
  4. Download Spark: spark-2.0.1-bin-hadoop2.7.tgz
  5. Extract the archive in your home directory using the following command: tar -zxvf spark-2.0.1-bin-hadoop2.7.tgz. I prefer to create an opt directory in my home directory and then extract it under ~/opt/.

Next, we will edit our .bash_profile so we can open a spark notebook in any directory. So fire up your Terminal and type in:

nano .bash_profile

My .bash_profile looks as follows:

export SPARK_PATH=~/opt/spark-2.0.1-bin-hadoop2.7/bin
export PYSPARK_PYTHON="python3"
export PYSPARK_DRIVER_PYTHON="jupyter" 
export PYSPARK_DRIVER_PYTHON_OPTS="notebook" 
alias snotebook='$SPARK_PATH/pyspark --master local[2]'
export PATH="$SPARK_PATH:$PATH"

export GRADLE_HOME="/Users/lucas/opt/gradle-2.2.1"
export PATH="$PATH:$GRADLE_HOME/bin"

export ANT_HOME="/Users/lucas/opt/apache-ant-1.9.4"
export PATH="$PATH:$ANT_HOME/bin"

export M2_HOME="/Users/lucas/opt/apache-maven-3.2.5"
export PATH="$PATH:$M2_HOME/bin"
export PATH="/usr/local/mysql/bin:$PATH"

export MONGODB_HOME="/Users/lucas/opt/mongodb-osx-x86_64-3.0.4"
export PATH="$PATH:$MONGODB_HOME/bin"

export JASYPT_HOME="/Users/lucas/opt/jasypt-1.9.2"
export PATH="$PATH:$JASYPT_HOME/bin"

export JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home"

export PATH="/opt/local/bin:/opt/local/sbin:$PATH"

# Setting PATH for Python 3.5
PATH="/Library/Frameworks/Python.framework/Versions/3.5/bin:${PATH}"

export PATH

The relevant stuff is:

export SPARK_PATH=~/opt/spark-2.0.1-bin-hadoop2.7/bin
export PYSPARK_PYTHON="python3"
export PYSPARK_DRIVER_PYTHON="jupyter" 
export PYSPARK_DRIVER_PYTHON_OPTS="notebook" 
alias snotebook='$SPARK_PATH/pyspark --master local[2]'
export PATH="$SPARK_PATH:$PATH"

The PYSPARK_DRIVER_PYTHON parameter and the PYSPARK_DRIVER_PYTHON_OPTS parameter are used to launch the PySpark shell in Jupyter Notebook. The --master parameter is used for setting the master node address. Here we launch Spark locally on 2 cores for local testing.

Install Jupyter Notebook with pip

First, ensure that you have the latest pip; older versions may have trouble with some dependencies:

pip3 install --upgrade pip

Then install the Jupyter Notebook using:

pip3 install jupyter

That’s it!

You can now run:

pyspark

in the command line. A browser window should open with Jupyter Notebook running under http://localhost:8888/.

Configure Jupyter Notebook to show line numbers

Run

jupyter --config-dir

to get the Jupyter config directory. Mine is located under /Users/lucas/.jupyter. Run:

cd /Users/lucas/.jupyter

Run:

mkdir custom

to create a custom directory (if it does not already exist). Run:

cd custom

Run:

nano custom.js

and add:

define([
    'base/js/namespace',
    'base/js/events'
    ],
    function(IPython, events) {
        events.on("app_initialized.NotebookApp",
            function () {
                IPython.Cell.options_default.cm_config.lineNumbers = true;
            }
        );
    }
);

You can add any JavaScript here; it will be executed by the IPython notebook at load time.

Install a Java 9 Kernel

Install Java 9. Java home is then:

/Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home

Install kulla.jar. I have installed it under ~/opt/.

Download the kernel. Again, I placed the entire javakernel directory under ~/opt/.

This kernel expects two environment variables defined, which can be set in the kernel.json (described below):

KULLA_HOME - The full path of kulla.jar
JAVA_9_HOME - like JAVA_HOME but pointing to a java 9 environment

So go ahead and edit kernel.json in the kernel you just downloaded to look as follows:

{
 "argv": ["python3", "/Users/lucas/opt/javakernel",
          "-f", "{connection_file}"],
 "display_name": "Java 9",
 "language": "java",
 "env" : {
     "JAVA_9_HOME": "/Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home",
     "KULLA_HOME": "/Users/lucas/opt/kulla.jar"
     }
}

Run:

cd /usr/local/share/jupyter/kernels/

Run:

mkdir java

Run:

cp /Users/lucas/opt/javakernel/kernel.json java/

to copy the edited kernel.json into the newly created java directory.

Install gnureadline by running:

pip install gnureadline

in the command line.

If all worked you should be able to run the kernel:

jupyter console --kernel java

and see the following output:

java version "9-ea"
Java(TM) SE Runtime Environment (build 9-ea+143)
Java HotSpot(TM) 64-Bit Server VM (build 9-ea+143, mixed mode)
Jupyter console 5.0.0
 

Configure Maven 3.2 to use JDK v1.7 under Mac OS X 10.9

In this article I describe how to configure Maven 3.2 to use JDK v1.7 under Mac OS X 10.9.
Step 1: Download Maven
Go ahead and download the latest Maven version here. For this tutorial just download the binaries: apache-maven-3.2.1-bin.tar.gz.

Step 2: Install Maven
Once you have downloaded the zipped file, extract it. There should be a folder called apache-maven-version (apache-maven-3.2.1 in my case) with contents looking something like this:
- apache-maven-3.2.1
--- bin
--- boot
--- conf
--- lib
--- LICENSE
--- NOTICE
--- README.txt

We will install Maven in /usr/local/apache-maven. So go ahead and navigate to your /usr/local/ directory. You can do this either via the Terminal using:
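
cd /usr/local/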

or by typing ‘command-shift-g’ in the Finder and then entering ‘/usr/local/’ in the input field.

Next create a new directory called ‘apache-maven’ in your /usr/local/ directory and move the
‘apache-maven-version’ folder into the newly created ‘apache-maven’ directory. So you will have
a directory structure as follows: /usr/local/apache-maven/apache-maven-3.2.1/

Open a new Terminal window and type in:
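
sudo nano ~/.bash_profile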

enter your password and edit your profile file to contain something like the following:
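
export M2_HOME=/usr/local/apache-maven/apache-maven-3.2.1
export PATH=$PATH:$M2_HOME/bin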

Hit ‘control-o’ and then enter to save and finally ‘control-x’ to exit the editor.

Step 3: Install Java v1.7
Go to the official Oracle website and download the latest version of the Java SE Development Kit here.

Mount the .dmg file and double click the Install Package to install the latest version of Java(version 1.7.0_60 in my case).

Java will be installed under ‘/Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home’ in Mac OS X 10.9 (Mavericks). You can find this out by typing
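
/usr/libexec/java_home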

in your Terminal.

Step 4: Configure profile file
Finally, we need to set the Java home directory. Again, type the following in the Terminal:
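
sudo nano ~/.bash_profile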

authenticate and then enter the following:
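
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home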

So our final .bash_profile file looks like so:
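
export M2_HOME=/usr/local/apache-maven/apache-maven-3.2.1
export PATH=$PATH:$M2_HOME/bin

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home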

Again hit ‘control-o’ and then enter to save and finally ‘control-x’ to exit the editor.

Step 5: Verify that everything went smoothly
Run
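
mvn -version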

to verify that Maven is correctly installed. The output should report Apache Maven 3.2.1 together with the Java version it uses.

Similarly run
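
java -version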

to verify that Java is correctly installed. The output should report java version 1.7.0_60.

That’s it! Thanks!