A Primer On Contributing To Projects With Git

posted on 2017-03-08 at 14:10 EST

There is no shortage of tutorials on this subject, but I haven't seen any that attempt to guide a person who has zero experience with any of it. So, the goal of this article is to give a fresh "newbie" all of the information they need to collaborate on projects that use the Git SCM. This will not be a complete guide to git, or even a full introductory one; you should read other tutorials and the reference docs for that.

Setup

We will focus on using the command line interface. Thus, to start, we need to set up the environment.

Windows

I am not a Windows person. My day job is Linux administration and my full-time desktop environment is Apple macOS. Given that, the easiest environment I have found for Windows is to:

  1. Install msysgit
  2. Install ConEmu
  3. Configure the default "task" for ConEmu to the one that uses the Bash shell provided by msysgit

Linux

This will vary based on the distribution you use. The short of it is that you need to install the package that provides git; typically, the name of the package is simply "git".

If you are using Void Linux (my preference), then I recommend installing both the "git" and "git-perl" packages.
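
For example, the following commands sketch a git install on two common distributions (package names and tools vary, so check your distribution's documentation):

    # Debian and Ubuntu
    $ sudo apt-get install git

    # Void Linux; the -S flag syncs the package index before installing
    $ sudo xbps-install -S git git-perl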

macOS

I prefer using the "git" package from MacPorts. But the easiest way to get started is to simply open a Terminal.app session and execute git:

    $ git
    $ # you will be prompted to install the necessary components
    

Speaking of Terminal.app, I recommend switching to iTerm2. It's just better.
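
Regardless of how you install it, you can confirm that git is available by asking for its version:

    $ git --version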

Config

With git installed, you should now configure it to know some details about you. These details will be used to identify you on the changes that you make within a project:

    $ git config --global user.name 'FirstName Surname'
    $ git config --global user.email 'your.email@example.com'
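
To double-check these settings, you can list your global configuration at any time:

    $ git config --global --list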
    

Authentication

We will discuss central repositories shortly, but regardless of how the project you wish to contribute to chooses to centralize, you will need to authenticate yourself when synchronizing your contributions. You have two options:

  1. Communicate with the central repositories over HTTPS
  2. Communicate with the central repositories over SSH

In case #1, you will be prompted for your user credentials each time you work with the remote system. You can minimize this by using a "credential helper." A nice overview of getting such a helper set up is available at http://stackoverflow.com/a/5343146/7979.
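
As a minimal sketch, git's built-in "cache" helper keeps your credentials in memory for a configurable period (one hour in this example), and on macOS the "osxkeychain" helper can store them in the system keychain instead:

    $ git config --global credential.helper 'cache --timeout=3600'
    $ # or, on macOS:
    $ git config --global credential.helper osxkeychain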

As for case #2, you will need to create an SSH key pair for yourself and configure the remote system to recognize it. Given the complexity of this method, we will assume the first method is being used. If you want to learn about the SSH method, then I am sure whichever central system your project uses will have instructions that will help you out.

Collaboration

Now that we have a working git environment, we can learn about how to actually use it to collaborate on a project.

Account Setup

Whether you are working solely within an institution or you are participating in an open source project, you will be working with a central repository. There are many ways a repository can be hosted centrally, but the most common, and the ones we will assume in this article, are provided by the following services:

  1. GitHub
  2. GitLab
  3. Bitbucket

All three offer some form of on-premises solution, public hosted repositories, and private hosted repositories. Regardless of the service provider and its location, you will need an account with the service. Thus, the easiest step is the first -- create an account.

For the rest of this article we will assume you created a GitHub account at github.com.

Git And GitHub

git is a distributed source code management tool. This means that git is intended to be used locally, without a central server. But all of the sites outlined in the account setup section are centralized servers. So we need to think of a site like GitHub as a system that many people have "local" accounts on, where they store their git repositories. This allows users to make their repositories available to other users of the GitHub system, such that those users can create their own copies of other users' repositories. In turn, this allows GitHub to provide features on top of the standard git features.

As a brief overview of the remainder of this article, the workflow created by this setup is as follows:

  1. Bob creates a git repository on his local machine.
  2. Bob copies it to his GitHub account, thus making the real repository the one hosted on the GitHub system.
  3. Alice decides she likes Bob's project and wants to help him with it, so she "forks" (copies) the repository to her own GitHub account.
  4. Alice "clones" her copy of the repository on the GitHub system to her local computer.
  5. Alice tells her local repository about Bob's "upstream" (original) repository so that she can stay up-to-date with Bob's changes.
  6. Alice creates a "branch" on her local repository, makes changes, and "pushes" those changes to her fork on the GitHub system.
  7. Alice uses the GitHub interface to tell Bob about her changes, asking if he'd like to incorporate them into his original repository.
  8. Bob decides he likes the changes, accepts them, and his repository on the GitHub system is updated.

Fork The Project

For the remainder of this article we will assume that you want to collaborate on the awesome Pino project. Our first step is to create a copy of the project in our account on GitHub. So we navigate to https://github.com/pinojs/pino in a web browser and click the "Fork" button (currently in the upper right corner of the page). GitHub will then show a screen indicating that the process is happening, and then load the Pino repository in your account.

Cloning

Now that Pino has been forked to your account, we will "clone" it to your local machine. Cloning, in this context, is merely copying the repository from your GitHub account to your personal computer. To accomplish this task, we use our terminal and enter:

    $ cd ~/Projects # or any place you want to keep a collection of projects
    $ git clone https://github.com/your-username/pino.git
    

At this point you will have a directory: ~/Projects/pino. This directory is your local copy of the repository. This local copy is automatically tied to your copy of the repository on the GitHub system. This link between your two copies is known locally by the name "origin". Within git this is known as a "remote". To see the remotes associated with your local repository, which, at this point, is only "origin", issue the commands:

    $ cd ~/Projects/pino
    $ git remote
    

For more information on remotes, read https://git-scm.com/docs/git-remote.

Before you begin working on your changes it is a good idea to connect your local repository to the original repository. Colloquially, this is known as the "upstream" remote. From within your local repository, issue the following command:

    $ git remote add upstream https://github.com/pinojs/pino.git
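
To confirm that both remotes are configured, list them along with their URLs (the your-username portion below is a placeholder for your actual GitHub username):

    $ git remote -v
    origin    https://github.com/your-username/pino.git (fetch)
    origin    https://github.com/your-username/pino.git (push)
    upstream  https://github.com/pinojs/pino.git (fetch)
    upstream  https://github.com/pinojs/pino.git (push)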
    

Create A Feature Branch

We are now ready to begin working on some changes to the project. The key to successful collaboration is to request the minimal set of changes necessary to implement your idea (or fix). You should do this on a new branch within your repository. A branch is essentially a named pointer to a snapshot of the repository at a specific point in time. By working on a branch you pin your work to the state of the project at the moment you branched, and you make it easier for the upstream project owners to review your changes when you submit them.

Typically, you will want to name your branch in such a way that it indicates why the branch was created. So, let's assume we want to make some documentation corrections. Enter the following command from within your local copy of the repository:

    $ git checkout -b doc-corrections
    

The above command is a shortcut for the following two commands:

    $ git branch doc-corrections
    $ git checkout doc-corrections
    

Every repository has what is known as a master branch. At this point, we have started a new branch, doc-corrections, from the current state of the master branch. To see the branches available:

    $ git branch
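
The output lists your local branches and marks the current one with an asterisk; at this point it should look something like:

    * doc-corrections
      master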
    

Before moving on, let's also create the doc-corrections branch within your copy of the repository on GitHub. To do this, we will "push" our branch:

    $ git push -u origin doc-corrections
    

This has done two things:

  1. It has created the doc-corrections branch in your repository on GitHub.
  2. It has configured your local copy of the repository to know that the local doc-corrections branch corresponds to the doc-corrections branch in your GitHub copy of the repository. This allows for some shortcuts when issuing certain git commands, as shown below.
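
You can see this tracking relationship by passing -vv to git branch; the bracketed names show which remote branch each local branch tracks (the hashes and commit summaries below are illustrative placeholders):

    $ git branch -vv
    * doc-corrections 1a2b3c4 [origin/doc-corrections] Most recent commit summary
      master          1a2b3c4 [origin/master] Most recent commit summary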

To learn more about branching and pushing, read https://git-scm.com/docs/git-branch and https://git-scm.com/docs/git-push.

Making Changes

Now that we are on our own branch we can make our changes. For now, let's pretend you have made some typo corrections to the README.md file. Which is to say, you have opened README.md in your text editor, adjusted the text within, and saved the document. If you issue the following command:

    $ git status
    

You will see that git has recognized that you made changes to the file. At this point git isn't going to do anything with those changes. Changes to files tracked by a git repository pass through two states: "modified" and "staged" (i.e. scheduled to be committed). In the modified state git merely recognizes that a file has been changed from its last recorded state. In the staged state git will write the changes made to the file into its internal tracking when the git commit command is run. So, let's move our changes from the modified state to the staged state:

    $ git add README.md
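
If you run git status again, README.md should now be listed under "Changes to be committed"; the output will look roughly like this:

    $ git status
    On branch doc-corrections
    Changes to be committed:
      (use "git reset HEAD <file>..." to unstage)

            modified:   README.md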
    

With the changes scheduled to be committed, let's actually perform the commit:

    $ git commit -m 'A short summary of the changes'
    

The above is the equivalent of sending an email with nothing more than the subject line filled in. To write a full commit message, simply issue git commit without the -m flag. This will open the default commit editor, probably a variant of vi. A full commit message should have the following format:

    A short summary of the changes

    A body describing your changes in further detail.
    The summary line should not exceed 40 to 50 characters, and the body
    lines should not exceed 80 characters. These are not hard and fast
    rules, per se, but they are widely followed guidelines.
    Some projects have other requirements for commit messages, and
    may refuse changes if they are not followed.
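
After committing, you can verify that the change was recorded by viewing the most recent entry in the repository history:

    $ git log -1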
    

With your changes committed to the local repository, it's time to send them to your copy on GitHub:

    $ git push
    

We are able to use this short push command since we linked your local doc-corrections branch to the remote doc-corrections branch with the -u flag earlier. If we hadn't, you'd have to issue:

    $ git push origin doc-corrections
    

You can learn more about committing at https://git-scm.com/docs/git-commit.

Incorporating Upstream Changes

Prior to sending our corrections to the upstream project, it's a good idea to make sure you have incorporated any changes that have occurred upstream into your repository. To do that, we need to switch to the master branch, pull in changes from upstream, and then merge them into your doc-corrections branch:

    $ git checkout master
    $ git pull upstream master
    $ git checkout doc-corrections
    $ git merge master
    

At this point git may tell you that there are conflicts. A conflict arises when the same part of a file has been edited in both branches of the merge, master and doc-corrections in this case, such that git can't decide which change should "win." If this happens you will need to fix the conflicts and then issue a git commit.
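
Git marks the disputed region of the file with conflict markers. A conflicted section of README.md might look something like the following (the text on either side of the ======= divider is illustrative); you resolve it by editing the section down to the text you want and deleting the marker lines:

    <<<<<<< HEAD
    Your corrected line of documentation.
    =======
    The upstream version of the same line.
    >>>>>>> master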

With all of the changes incorporated, it's time to push them to your copy on GitHub: git push.

You can learn more about resolving conflicts at https://githowto.com/resolving_conflicts.

Sending Your Changes To Upstream

Now that your changes are pushed to your copy of the repository on GitHub, it's time to send them to the upstream repository owner(s) for review and possible inclusion. To do this, open your copy of the repository on GitHub. It'll be at a URL like https://github.com/your-username/pino.

With your repository open on GitHub in your browser, you should see a message suggesting that you send a "Pull Request" (PR) from your doc-corrections branch to the upstream master branch. Simply click the button in that message and you'll be taken to a form where you can describe your PR. It will default to the last commit message in your branch, but you can change it. When you are happy with the PR message, submit the PR and wait.

GitHub is going to email the authors of the upstream project to let them know about your PR. They will review it and probably start a discussion with you, or just accept it if a discussion isn't necessary. Either way, you will receive emails keeping you informed of the process.

Once the PR has been resolved, you can remove your feature branch:

    $ git checkout master
    $ git pull upstream master
    $ git branch -D doc-corrections
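
If you also want to remove the branch from your copy of the repository on GitHub, you can delete the remote branch too:

    $ git push origin --delete doc-corrections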
    

Summary

While it may seem like a complicated process, and in some ways it is, you should now be able to collaborate on a project that uses the Git SCM. In general, this is how most open source projects work. And once you have gone from fork to PR, the process shortens to simply staying synchronized, creating branches, and submitting PRs.

Don't be afraid to get involved. If the upstream people ask for changes, in most cases they are not insinuating anything about you personally. They simply want your changes to conform to the nature of their project, or they have suggestions for improvement, so that your changes can be included.

A great place to get started with almost any project is with the documentation, just as we did in this article. If there's one thing every project wants it's someone willing to write documentation. As you get more comfortable, you will certainly start branching out from there.

By the way, I'm a maintainer on the Pino project. I look forward to seeing your PRs :)


Switching To Void Linux On My HTPC

posted on 2016-01-10 at 17:15 EST

In 2008 I built myself an HTPC. It started out running Arch Linux but switched to Ubuntu when Arch decided to force systemd. Ubuntu's Upstart didn't live up to Arch's original RC system, but it fit the bill of not being systemd. I have never liked Ubuntu, so it was most certainly a stop-gap solution. My replacement for Ubuntu is Void Linux.

I discovered Void a few months ago when Debian decided to force systemd as well. Once that happened I did some digging on Distrowatch for distributions that don't include systemd (aside: technically I did a search for distros with a specific init system, but that doesn't seem possible at the time of this writing). After researching a few on the list it was clear to me that Void Linux would be my new distribution of choice. Its release model is very much like Arch's, it makes a point of avoiding OpenSSL by using LibreSSL instead, and it uses Runit for the init system.

Regarding LibreSSL over OpenSSL: look back at the early posts of opensslrampage.org. It's very illuminating.

Runit is rather amazing in its simplicity. The flexibility of sysvinit is still present, but there's pretty much no reason to have more than 5 lines in a Runit init script; still, there are crazy people out there. The short of it is that Runit doesn't fork processes. It simply starts a process and waits for it to exit. If the process does exit, Runit restarts it. So a complete init script can be:

    #!/bin/sh
    # create the runtime directory, then replace this shell with
    # smbd running in the foreground so Runit can supervise it
    mkdir -p /run/samba
    exec smbd -F -S
    

That simple script is all that is necessary to start Samba. Compare that to a traditional sysv init script or a systemd unit and you'll see why this is so great.

So, getting back to my HTPC. Reinstalling the base OS with Void was very easy. And installing everything I needed to run my interface (Kodi) was even easier:

    $ xbps-install kodi xorg x11vnc
    

Now, at this point there's always some trickery needed to get the system to boot straight to Kodi. This time was no exception. Initially I thought I'd be able to get by with a guide on the Void wiki. But that didn't pan out: the guide assumes the user account will only ever be used for logging straight in to X11. I need to SSH to the system as that user on a regular basis, so that assumption wouldn't work.

When I originally built my HTPC back in 2008 there was a display manager that supported automatic logins without much hassle (I can't recall which one). But that got replaced with SLiM. SLiM supported automatic logins, but only on the first login. If whatever program you were running, Kodi in this case, crashed, then you'd be staring at a login screen. Who wants to get out a keyboard and mouse to use an entertainment system? Not me. I searched for a solution and found none, so I wrote my own tool for the job. If you've read this site for a while you may have seen it listed as "mythlogin", as I originally used MythTV. Since the guide's method of automatic login wouldn't work for me I once again turned to my tool. This time I've renamed it autox; this tool will likely be in the official Void repository by the time you read this.

I originally wrote autox to be used on a sysvinit system with an inittab. When I switched to Ubuntu, it turned out using autox was almost as easy. But under Runit? It wasn't so easy:

  1. autox doesn't truly log a user in to the system. It merely sets up his regular environment with all of his PAM-granted permissions, e.g. real-time clock access.
  2. Simply using agetty as the guide does results in the process being launched outside of Runit's supervisor process. That's no good, since we want Runit to manage the process.

Digging into how Void sets up ttys I learned about a tool I hadn't heard of before -- setsid. Combining setsid with agetty did the trick. The resulting Runit script for my HTPC:

    #!/bin/sh
    
    sv start wpa_supplicant
    exec 2>&1
    exec setsid -w agetty -a htpc -n -l /usr/bin/autox -o htpc tty7 38400 linux
    

Wait. What is line number three? That's how you define a dependent service under Runit. Instead of some convoluted descriptor file like Upstart and systemd want, you just start the required service. In this case I need network access, and my HTPC is currently connected only via 802.11n. So I need to authenticate to my access point prior to launching Kodi, since Kodi uses the Internet.

There was one other problem, though. I use x11vnc to make X11 accessible from my other computers. This is handy when I need to do something with Kodi that would be a chore with just an IR remote. I had been using my .xinitrc file to launch x11vnc as a background process. Well, doing that under this new configuration resulted in x11vnc running outside of Runit's supervisor process. Again, not good. Solution? Runit:

    #!/bin/sh
    sv start kodi
    exec x11vnc -many -q -avahi -ncache 10 -passwd super_secret
    

Again, since x11vnc is dependent on X11 being already up and running, I just invoke the kodi service beforehand. Simple.

Finally, there is one other piece of my HTPC puzzle. I use nzbget for some things, and I let it run on my HTPC as the "htpc" user. Under the previous init systems it wasn't worth the hassle to define it as a system service. So I wrapped it in a screen script and launched it manually every time I had to reboot my HTPC (which isn't often). But there's a pretty cool feature of Runit -- user services. No more manually starting nzbget:

    #!/bin/sh
    exec 2>&1
    exec /bin/nzbget --server
    

With that run script and /home/htpc/{sv,service} I can let Runit take care of starting and stopping it, all while not having to jump through a bunch of hoops to start it as a specific user. This is something I'd love to use at work, but I'm stuck with RedHat and I'm not going to put another init system on top of an existing one (maybe).
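
For context, a user service setup like this works by pointing a second, per-user runsvdir at a service directory inside the user's home directory. A minimal sketch of the system-level run script that would accomplish this, assuming the /home/htpc/service directory mentioned above, is:

    #!/bin/sh
    # run a dedicated runsvdir as the htpc user, supervising
    # the service definitions under /home/htpc/service
    exec chpst -u htpc runsvdir /home/htpc/service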

Anyway, the point of this post was mainly to highlight Runit and Void Linux. They are a great combination for an appliance system like an HTPC. Such a system doesn't need a lot of resources, but it is better to give the actual application the majority of the resources. With Void and Runit your application gets almost all of the system resources. I'll end this post with the stats on my HTPC's currently used resources:

    % free -h
                  total        used        free      shared  buff/cache   available
    Mem:           7.7G        475M        2.8G         57M        4.4G        7.1G
    Swap:            0B          0B          0B
    
    % ps_mem
     Private  +   Shared  =  RAM used    Program
    
     92.0 KiB +  23.5 KiB = 115.5 KiB    nanoklogd
    100.0 KiB +  26.0 KiB = 126.0 KiB    socklog
    124.0 KiB +  38.0 KiB = 162.0 KiB    uuidd
    132.0 KiB +  71.5 KiB = 203.5 KiB    kodi
    180.0 KiB +  38.5 KiB = 218.5 KiB    acpid
    176.0 KiB +  73.0 KiB = 249.0 KiB    runsvdir (2)
    192.0 KiB + 132.0 KiB = 324.0 KiB    sh (2)
    216.0 KiB + 169.0 KiB = 385.0 KiB    autox (2)
    200.0 KiB + 236.0 KiB = 436.0 KiB    xinit
    448.0 KiB + 166.0 KiB = 614.0 KiB    svlogd (5)
    448.0 KiB + 219.0 KiB = 667.0 KiB    agetty (4)
    704.0 KiB +   4.0 KiB = 708.0 KiB    runit
    740.0 KiB + 272.0 KiB =   1.0 MiB    login (2)
    932.0 KiB + 132.5 KiB =   1.0 MiB    sudo
      1.0 MiB +  90.5 KiB =   1.1 MiB    udevd
      1.4 MiB + 506.5 KiB =   1.9 MiB    runsv (19)
      1.6 MiB + 395.0 KiB =   1.9 MiB    wpa_supplicant
      2.3 MiB + 109.5 KiB =   2.4 MiB    most
      2.6 MiB + 461.5 KiB =   3.1 MiB    mandoc
      2.9 MiB + 437.5 KiB =   3.3 MiB    nmbd
      1.2 MiB +   2.7 MiB =   3.9 MiB    sshd (5)
      4.2 MiB +   4.3 MiB =   8.5 MiB    smbd (2)
      7.6 MiB +   1.3 MiB =   8.9 MiB    mosh-server (2)
     12.2 MiB + 562.5 KiB =  12.8 MiB    x11vnc
     11.1 MiB +   2.4 MiB =  13.6 MiB    zsh (6)
     14.9 MiB + 811.0 KiB =  15.7 MiB    nzbget
     29.2 MiB +   1.8 MiB =  31.0 MiB    Xorg
    411.8 MiB +   5.2 MiB = 417.0 MiB    kodi.bin
    ---------------------------------
                            531.1 MiB
    =================================
    
    % pstree
    runit─┬─2*[mosh-server───zsh]
          └─runsvdir─┬─runsv─┬─socklog
                     │       └─svlogd
                     ├─4*[runsv───agetty]
                     ├─runsv───sshd─┬─sshd───sshd───zsh
                     │              └─sshd───sshd───zsh───pstree
                     ├─runsv───uuidd
                     ├─runsv───login───zsh
                     ├─runsv───smbd───smbd
                     ├─runsv───nanoklogd
                     ├─runsv─┬─svlogd
                     │       └─wpa_supplicant
                     ├─runsv─┬─mythlogin───autox───sh───xinit─┬─Xorg───{Xorg}
                     │       │                                └─sh───kodi───kodi.bin─┬─{AESink}
                     │       │                                                       ├─{ActiveAE}
                     │       │                                                       ├─{AirPlayServer}
                     │       │                                                       ├─{EventServer}
                     │       │                                                       ├─{FDEventMonitor}
                     │       │                                                       ├─23*[{LanguageInvoker}]
                     │       │                                                       ├─{PeripBusUSBUdev}
                     │       │                                                       ├─{TCPServer}
                     │       │                                                       ├─17*[{kodi.bin}]
                     │       │                                                       └─2*[{libmicrohttpd}]
                     │       └─svlogd
                     ├─runsv───login───zsh───man───most
                     ├─runsv───nmbd
                     ├─runsv───udevd
                     ├─runsv───acpid
                     ├─runsv─┬─svlogd
                     │       └─x11vnc
                     └─runsv───runsvdir───runsv─┬─nzbget───6*[{nzbget}]
                                                └─svlogd
    

Goodbye Wordpress!

posted on 2015-11-29 at 17:15 EST

For the last five years this site has been generated by Wordpress. The decision to move to Wordpress was based primarily on the amount of spam that was being posted through the comment system I had written. Wordpress provides some great tools for fighting comment spam. But comments on weblog posts are becoming more irrelevant by the day; or rather, no one leaves them anymore. So I don't have need of that feature any longer.

But that's not why I have dumped Wordpress. I have dumped Wordpress because it is one giant security hole. I could keep linking stories of its vulnerabilities all day. Suffice it to say, it is foolish to continue using Wordpress.

Given that fact, I decided to forego a dynamically generated website altogether. This site is now completely static; it is written in nothing more than plain old HTML, CSS, and JavaScript. That used to come at a cost of maintainability: it was far easier to use a dynamic content generator for a site of any size if you wanted to be able to maintain it. Nowadays that isn't the case. There are many, many tools for generating static websites. I even wrote one at one point (you probably shouldn't use it).

The tool I settled on using is Metalsmith. It's a very simple tool with a lot of flexibility. I won't go over it in detail here; you can read about it in detail elsewhere. If you are curious about the code used to generate this site, you can peruse the git repository. At the time of this writing the project is just enough to get going.

If you're using Wordpress, and want to migrate off of it, then I have written a tool you might want to use -- wp-to-static. I had been wanting to do this migration since early 2015, but it took me a while to finish writing that tool (mostly due to laziness). I'm a firm believer in the HTTP 301 (permanent redirect) status code. As a result, all of my old content is still available; even my old, old content.

Anyway, I have been holding off on writing new posts because I didn't want to add any more content to Wordpress. Now that I've moved on to this setup I will maybe write more frequently. My current goal, though, is to come up with some sort of better template/design.

Finally, I may consider adding the Disqus comment system. But it's unlikely. If you have something to say about a post, you can mention @jsumners79 on Twitter or +JamesSumners on Google+.