Installing Raspbian Stretch on a Raspberry Pi 3 B+

This is a simple way to write an SD card image for your Raspberry Pi using macOS High Sierra. The goal is a minimal base image: start small, then add the desktop and other packages only as needed.

Required:

  • macOS
  • microSD card, 8 GB minimum

Download the image:

Raspbian

Select Desktop or Lite.

I chose Lite as the base and will add my own packages.

https://downloads.raspberrypi.org/raspbian_lite_latest

After downloading, unzip the file:

unzip 2018-04-18-raspbian-stretch-lite.zip

Use this file, or whatever the latest version is when you are reading this:

2018-04-18-raspbian-stretch-lite.img

Open a Terminal window.

Find the SD card by running diskutil. The device will appear as /dev/disk# (# being the disk number, e.g. /dev/disk2); confirm you have the right one by matching the reported size to your SD card.

  1. diskutil list
  2. sudo diskutil unmountDisk /dev/disk#
  3. sudo dd bs=1m if=~/Downloads/2018-04-18-raspbian-stretch-lite.img of=/dev/rdisk# conv=sync
  4. sudo diskutil unmountDisk /dev/disk#
  5. sudo diskutil eject /dev/disk#
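Steps 1–5 above can be collected into one small shell sketch. DISK and IMG here are placeholders, not fixed values: substitute the disk number you found with diskutil list and the path to your image.

```shell
#!/bin/sh
# Sketch of the flashing steps above -- DISK and IMG are assumptions;
# confirm the real disk number with `diskutil list` before running.
DISK="${DISK:-/dev/disk2}"
IMG="${IMG:-$HOME/Downloads/2018-04-18-raspbian-stretch-lite.img}"

# dd writes to the raw device (rdisk#), which is much faster than disk#
RDISK=$(printf '%s' "$DISK" | sed 's|/dev/disk|/dev/rdisk|')

diskutil list                                   # confirm the card really is $DISK
sudo diskutil unmountDisk "$DISK"               # release the card for raw writes
sudo dd bs=1m if="$IMG" of="$RDISK" conv=sync   # flash the image (can take several minutes)
sudo diskutil unmountDisk "$DISK"
sudo diskutil eject "$DISK"                     # now safe to remove the card
```

Writing to /dev/rdisk# instead of /dev/disk# bypasses macOS's buffered block device, which is why the dd step targets the raw device.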

After ejecting, insert the SD card into the Pi.

On first boot of Raspbian Stretch on the Pi 3 B+, the default credentials are:

Raspbian | user: pi | password: raspberry

After logging in you can do whatever you want.

I used https://www.raspberrypi.org/documentation/installation/installing-images/mac.md for reference.

For Windows or Linux, you can find the equivalent guides here.

Hadoop Disk issue

I ran into a problem with Hadoop where the DataNode wouldn't start up after I reformatted a drive. To fix this, make sure the VERSION file is identical across all of the Hadoop data directories; comparing checksums is a quick way to verify:

md5sum /hadoop/sd*/dfs/data/current/VERSION

If the checksums aren't identical across all partitions, you will get an error like this:

2010-09-15 13:27:12,916 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hadoop1 / 192.168.1.100
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.1+152
STARTUP_MSG: build = -r c15291d10caa19c2355f437936c7678d537adf94; compiled by 'root' on Mon Nov 2 00:44:35 EST 2009
************************************************************/
2010-09-15 13:27:13,261 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /hadoop/sdb/dfs/data is in an inconsistent state: has incompatible storage Id.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.getFields(DataStorage.java:183)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:227)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:216)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:228)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:304)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:222)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1306)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1261)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1269)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1391)

2010-09-15 13:27:13,262 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
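The md5sum comparison can be automated as a quick consistency check. This is a sketch: the /hadoop/sd* glob comes from the md5sum command above and is specific to this setup, so adjust the path to match your own data directories.

```shell
#!/bin/sh
# Sketch: count distinct md5 checksums of the VERSION files across all
# DataNode data directories. One distinct checksum means they all match.
# /hadoop/sd* is this setup's layout -- substitute your own dfs.data.dir paths.
count=$(md5sum /hadoop/sd*/dfs/data/current/VERSION \
        | awk '{print $1}' | sort -u | wc -l | tr -d ' ')

if [ "$count" = "1" ]; then
    echo "All VERSION files match"
else
    echo "Mismatch: $count distinct VERSION files found"
fi
```

If a mismatch is reported, the directory with the odd VERSION file is the one that will trip the InconsistentFSStateException shown above.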