Articles written by Parakh Singhal

How to Create and Apply Git Patch Files


In Git’s early years, there was no provision for a central remote repository that different contributors could use to merge their changes into a feature branch or raise pull requests. Sending patch file(s) via email was the name of the game. A patch file is a handy way of encapsulating the changes introduced into the repository as part of a commit into a single file. Patches can be created on a per-commit basis, or a bunch of commits can be squashed into a single patch file.

Git Format-Patch

The command that is used to create a patch file is format-patch. Various options are available that can be used in conjunction with the command to modify the output as desired. Let’s take a look at the most common operations that use the format-patch command.

To learn about the command let’s create a repository and track a text file with some text entries.

$ mkdir PatchFiles
$ cd PatchFiles
$ git init
$ touch file1.txt
$ printf "first line" > file1.txt
$ git add .
$ git commit -m "First commit"
$ printf "\nsecond line" >> file1.txt
$ git commit -am "Second commit"
$ git log --oneline master

Creation of master branch

Next let’s create a branch named develop off of the master branch and track a text file with some text entries.

$ git branch develop
$ git checkout develop
$ touch file2.txt
$ printf "first line" > file2.txt
$ git add .
$ git commit -m "First commit in develop branch"
$ printf "\nsecond line" >> file2.txt
$ git commit -am "Second commit in develop branch"
$ git log --oneline

Creation of develop branch

Now that we have laid the groundwork, let’s understand the format-patch command. The format-patch command takes in the name of the branch against which you want to compare the commits of the current branch pointed to by HEAD. By default, format-patch creates a patch file for every commit not available in the target branch.

$ git format-patch master
$ dir

Format Patch Multiple files

In earlier times, when there was no remote repository, such patch files used to be mailed by project contributors to the project maintainer.
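The squash-into-one-file behaviour mentioned earlier comes from the --stdout option. The following is a minimal sketch of both variants, run in a throwaway repository that mirrors the example above (all file and branch names are illustrative):

```shell
# Rebuild the example repository in a temporary directory
cd "$(mktemp -d)"
git init -q && git symbolic-ref HEAD refs/heads/master
git config user.name "Demo User" && git config user.email "demo@example.com"

printf "first line" > file1.txt
git add . && git commit -qm "First commit"

git checkout -qb develop
printf "first line" > file2.txt
git add . && git commit -qm "First commit in develop branch"
printf "\nsecond line" >> file2.txt
git commit -qam "Second commit in develop branch"

# Default behaviour: one numbered patch file per commit missing from master
git format-patch master

# --stdout squashes all commits into a single stream, redirected to one file
git format-patch master --stdout > develop.patch

ls *.patch
```

Here the default invocation produces the 0001- and 0002- files, while the --stdout form collects both commits into the single develop.patch.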
Next, we will discuss the two commands that can be used to apply the patch files, i.e., apply and am.

Git Apply

There could be situations that demand applying the changes in a patch to the target branch, but not including the corresponding commit message in the commit history of the target branch. Git’s apply command is used in such situations.

The apply command helps you test out the changes introduced by the patch locally before you formally commit them into the repository. Since the changes are applied only to the working directory, you can use git diff to view the changes applied. Also, the eventual commit message can be a bit more descriptive, signifying the nature of the changes applied.

Let’s see the command in action in code.

$ git checkout master
$ git apply 0001-First-commit-in-develop-branch.patch
$ git status
$ git log --oneline
$ dir

Output of the apply command
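Since apply touches only the working directory, it pairs well with the --stat and --check options for vetting a patch before applying it. A minimal sketch, rebuilding the repository layout used above (names are illustrative):

```shell
# Rebuild the two-branch example in a temporary directory
cd "$(mktemp -d)"
git init -q && git symbolic-ref HEAD refs/heads/master
git config user.name "Demo User" && git config user.email "demo@example.com"

printf "first line" > file1.txt
git add . && git commit -qm "First commit"

git checkout -qb develop
printf "first line" > file2.txt
git add . && git commit -qm "First commit in develop branch"
git format-patch master >/dev/null

git checkout -q master
# --stat previews a diffstat; --check verifies the patch applies cleanly.
# Neither option modifies the working directory.
git apply --stat  0001-First-commit-in-develop-branch.patch
git apply --check 0001-First-commit-in-develop-branch.patch

git apply 0001-First-commit-in-develop-branch.patch
git status --short   # file2.txt appears as untracked; no commit was created
```

Note that after the final apply, the commit count on master is unchanged: the patch contents land in the working directory only.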

Git AM

Git’s am (apply mailbox) command is used when you are assured of the commits in a patch and want them applied verbatim, along with their corresponding metadata consisting of the author’s information and the commit message.

$ git checkout master
$ git am 0001-First-commit-in-develop-branch.patch
$ git status
$ git log --oneline
$ dir

Output of the am command

Word of Caution

If you are selectively applying patches, and a patch touches an artifact that came into existence in a prior patch, the application of that patch will fail. This is because patch files are commit-specific and carry only the changes specific to their commit.
In such cases, you will have to apply the patches sequentially, in chronological order, first bringing the artifact into existence and then applying any changes to it.

$ git checkout master
$ git am 0002-Second-commit-in-develop-branch.patch

Failed application of the second patch


We were not able to apply the second patch because file2.txt came into existence in the first commit, and hence in the first patch. Here it is imperative to apply the first patch first, and then the second.
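When applying a whole series, the numbered file names make ordering easy: am accepts several patches at once and applies them in the order given. A sketch, rebuilding the example repository (names are illustrative):

```shell
# Rebuild the two-branch example in a temporary directory
cd "$(mktemp -d)"
git init -q && git symbolic-ref HEAD refs/heads/master
git config user.name "Demo User" && git config user.email "demo@example.com"

printf "first line" > file1.txt
git add . && git commit -qm "First commit"

git checkout -qb develop
printf "first line" > file2.txt
git add . && git commit -qm "First commit in develop branch"
printf "\nsecond line" >> file2.txt
git commit -qam "Second commit in develop branch"
git format-patch master >/dev/null

git checkout -q master
# The shell expands *.patch in lexical order, which matches the 0001-, 0002-
# numbering, so the patches are applied chronologically:
git am -q *.patch
git log --oneline
```

Because both patches are applied in sequence, file2.txt exists before the second patch modifies it, and the application succeeds.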


Git provides the ability to create patches as discrete units signifying the evolution of a project, and enables the sharing of this evolution in the absence of a central remote repository.

Git offers two ways to apply patches as per requirements.

Git’s apply command applies the changes locally to the working directory and gives you the freedom to further introduce changes in the artifacts before formally committing into the repository.

Git’s am command applies the patches in the repository and includes the commit messages contained in the patches. It retains vital metadata like author information, email, etc.

A short note on Git Cherry-Pick

Key Takeaway

In the real world, situations arise where some urgent work is done on a codebase that has to be released to mainstream users. This work then needs to be brought down into the feature branches to maintain parity with the master branch.

In Git, we have the rebase command to handle such a situation, but in some cases a more surgical approach is required, whereby only the commit containing the changes is applied to the feature branch, excluding anything else that might otherwise pollute the feature branch. This is what Git’s cherry-pick command achieves.

Read On

One of the founding premises of Git is to commit often and commit early. This enables you, the user, to see the evolution of your work over time, and enables Git to manage that evolution as a series of discrete commits. By keeping commits small, you enable Git to recover your work in a granular manner, should the need arise. The power of the cherry-pick command dovetails with this practice of small, frequent, discrete commits.

Git’s cherry-pick command helps you apply a commit in a surgical manner, such that it brings only the changes corresponding to that commit into the branch under concern. This command is useful in scenarios where you have to bring in a limited number of changes pertinent to your work. Examples:
1. There’s a hotfix branch created from the master branch and you need to include the hotfix in your feature branch.
2. There are a couple of commits that you need to take from some other feature branch and incorporate into your branch.

Let’s take a look at an example to understand the cherry-pick command.

$ mkdir cherrypick
$ cd cherrypick
$ git init
$ touch file1.txt
$ printf "first line" > file1.txt
$ git add .
$ git commit -m "First commit in master branch"
$ printf "\nsecond line" >> file1.txt
$ git commit -am "Second commit in master branch"
$ printf "\nthird line" >> file1.txt
$ git commit -am "Third commit in master branch"
$ git branch develop
$ git checkout develop
$ touch file2.txt
$ printf "first line" > file2.txt
$ git add .
$ git commit -m "First commit in develop branch"
$ printf "\nsecond line" >> file2.txt
$ git commit -am "Second commit in develop branch"
$ printf "\nthird line" >> file2.txt
$ git commit -am "Third commit in develop branch"
$ git checkout master
$ git log --oneline develop
$ git cherry-pick <SHA1 hash of the second commit from the develop branch>


Resolve any conflicts that arise from the cherry pick here and then run a git log

$ git log --oneline master


In this example, we created a directory aptly named cherrypick and inserted some text entries into a text file named file1.txt. We then created a branch called develop from the master branch. In the develop branch we made some entries in a text file named file2.txt and then moved back to the master branch. We then listed the log entries from the develop branch and noted down the SHA1 hash of the commit whose changes we want to pick. In this case, we picked the second commit from the develop branch.

Figure 1 Creating the repository and some initial commits

Figure 2 Creation of a branch and some initial commits

Figure 3 Going back to the branch where a cherry-picked commit needs to be merged

Finally, we used Git’s cherry-pick command in the master branch in conjunction with the SHA1 hash noted before. This brings into the master branch the changes from the develop branch made in the commit corresponding to that hash. In doing so, you may be required to resolve any conflicts that arise.

Figure 4 The process of cherry-picking may throw up some conflicts which would require intervention

Figure 5 Once a commit is cherry-picked, its content becomes available in the target branch

This way we can literally cherry-pick the changes corresponding to specific commit(s) in a branch and bring them to the branch of our choice.
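As an end-to-end sketch, the following script picks only the middle commit of a three-commit develop branch onto master. The -x flag (an optional extra, not used above) records the origin commit in the message. File and branch names are illustrative:

```shell
# Build a repository where each develop commit adds a separate file,
# so the middle commit can be picked without conflicts
cd "$(mktemp -d)"
git init -q && git symbolic-ref HEAD refs/heads/master
git config user.name "Demo User" && git config user.email "demo@example.com"

printf "base" > file1.txt
git add . && git commit -qm "First commit in master branch"

git checkout -qb develop
for n in 1 2 3; do
  printf "work %s" "$n" > "d$n.txt"
  git add . && git commit -qm "Commit $n in develop branch"
done

git checkout -q master
# -x appends "(cherry picked from commit ...)" to the message for traceability
git cherry-pick -x "$(git rev-parse develop~1)"

# A contiguous range can be picked as well, oldest commit first, e.g.:
#   git cherry-pick develop~3..develop~1
git log --oneline
```

After the pick, master contains d2.txt but neither d1.txt nor d3.txt, confirming that only the one commit came across.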

Hope this was helpful.

Git Revert vs. Reset


We all make mistakes, and it’s my belief that we all make an honest attempt to correct them. If only life ran on Git. Git offers two ways to roll back a change made to a codebase – revert and reset. In this article we will discuss the two commands and the scenarios in which each is apt to be used.

Basic Terminology

Before we embark on discussing the differences between the commands, it is prudent to discuss some basic terminology that will be instrumental in understanding the nuances of operations offered by these commands.
HEAD: A pointer to the currently checked-out branch in Git, and through it, to the last commit on that branch.
INDEX: Represents the staging area, aka the contents of the proposed next commit.
Working Directory: Contains all the files available in HEAD and INDEX, plus artifacts that may never go into any commit, e.g. assembly files (.dll, .exe) which do not get committed due to .gitignore settings.

Revert command

Sometimes it happens that wrong work gets committed and pushed into the remote repository, making it available to the rest of the team, or project contributors, as the case may be. In such circumstances it becomes imperative that we introduce a context under which a rollback of the unintentional changes takes place.

Revert offers just such functionality. It creates a commit that is the opposite of the commit you want to roll back. Such a rollback commit retains the detailed history of all the commits that came before it, thus avoiding all the confusion.

Run the following commands in bash shell to simulate a revert:

$ mkdir Revert
$ cd Revert
$ git init
$ touch file.txt
$ printf "First commit" > file.txt
$ git add file.txt
$ git commit -m "First commit"
$ touch file2.txt
$ printf "Second commit" > file2.txt
$ git add file2.txt
$ git commit -m "Second commit"
$ printf "\n\nThird commit" >> file.txt
$ git add file.txt
$ git commit -m "Third commit"
$ dir

Note that there are two files file.txt and file2.txt in the INDEX and working directory.

$ git status
$ git log --oneline

Note there are three commits. Now let’s revert the second commit. Since file2.txt was created as part of the second commit, it should get deleted from the working directory and INDEX.

$ git revert <SHA1 hash corresponding to the second commit>
$ dir
$ git log --oneline
$ git status

The result of following the aforementioned commands should look something like:

Commit log after the revert

As you can see, we now have a total of four commits in the history, with the fourth one clearly recorded as a revert commit, erasing the work done as part of the second commit.
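If several consecutive commits need to be rolled back, revert’s --no-commit option accumulates the inverse changes in the staging area so they can be recorded as a single rollback commit. A minimal sketch with illustrative file names:

```shell
# Three commits, each adding one file, in a temporary repository
cd "$(mktemp -d)"
git init -q && git symbolic-ref HEAD refs/heads/master
git config user.name "Demo User" && git config user.email "demo@example.com"

for n in 1 2 3; do
  printf "content %s" "$n" > "f$n.txt"
  git add . && git commit -qm "Commit $n"
done

# Revert the two most recent commits (HEAD and HEAD~1) without committing yet;
# the inverse changes pile up in the INDEX
git revert --no-commit HEAD~2..HEAD
git commit -qm "Roll back commits 2 and 3"
git log --oneline
```

The history ends up with four commits: the original three plus one combined rollback, and only f1.txt survives in the working directory.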

Reset command

Reset is used when the faulty commit is available only locally on your system and has not been pushed out to the remote repository. This command offers three further flavors, and it is here that we will chiefly leverage the terminology covered earlier.
Soft Reset: This option moves the branch (that HEAD points to) to the commit mentioned, reinstating the conditions as they were when that commit was made. No changes are introduced in the INDEX (staging area) or working directory. This essentially does the same thing as “git commit --amend” and offers us a chance to change what we need to.
This means that you will get a chance to amend the commit message. Note that in soft mode no changes are introduced into the file(s) modified prior to the rolled-back commit.
Mixed Reset: This option moves the branch (that HEAD points to) and INDEX (staging area) to the commit mentioned, reinstating the conditions as they were when the commit was made. No changes are introduced in the working directory.
Hard Reset: This option moves the branch (that HEAD points to), the INDEX (staging area) and the working directory to the commit mentioned, reinstating the conditions as they were when that commit was made. This is destructive in nature and should be used with caution, as uncommitted work is not recoverable after the issuance of this command.

NOTE: In none of the cases is HEAD itself moved off the branch. HEAD always points to the branch; it is the branch that is made to point to the commit being reset to.

Run the following commands in bash shell to simulate all three flavors of reset:

$ mkdir Reset
$ cd Reset
$ git init
$ touch file.txt
$ printf "First commit" > file.txt
$ git add file.txt
$ git commit -m "First commit"
$ printf "\n\nSecond commit" >> file.txt
$ git add file.txt
$ git commit -m "Second commit"
$ printf "\n\nThird commit" >> file.txt
$ git add file.txt
$ git commit -m "Third commit"
$ git status
$ git log --oneline
$ git reset --soft HEAD~1

Now observe the changes made by the soft reset command

$ git status
$ git log --oneline

Result of the soft reset

The following are the results of the soft reset:
1. The third commit has been rolled back as seen in the commit log,
2. No changes have been introduced in the INDEX i.e. the staging area. The git status helpfully suggests that file.txt is ready to be committed into the repository. The contents of the file.txt are the same as they existed prior to the rolled back commit,
3. No changes have been introduced in the working directory. All the contents are intact.

Let’s continue our example with a mixed reset:

$ git add file.txt
$ git commit -m "Third commit"
$ git reset --mixed HEAD~1

Now observe the changes made by the mixed reset command

Result of the mixed reset

The following are the results of the mixed reset:
1. The third commit has been rolled back as seen in the commit log,
2. Changes have been introduced in the staging area, as suggested by the status. It helpfully suggests that file.txt needs to be added in order for it to be committed.
3. No changes have been introduced in the working directory.

Let’s continue our example with a hard reset:

$ git add file.txt
$ git commit -m "Third commit"
$ git reset --hard HEAD~1

Now observe the changes made by the hard reset command

Result of the hard reset

The following are the results of the hard reset:
1. The third commit has been rolled back as seen in the commit log,
2. INDEX i.e. staging area has been put into the state as it existed after the second commit and prior to the third commit. The data corresponding to the third commit has been lost from the file.txt.
3. Working directory has been put into the state as it existed prior to the third commit. Any contents introduced between the second and the third commit would have been lost as part of the hard reset of the third commit.


Now we have seen the difference between revert and reset and when each should be used. We also saw the three modes made available by reset – soft, mixed and hard – and how they affect the state of the working directory, the INDEX and the content. A hard reset should be used with caution, since uncommitted content is not recoverable after the issuance of the command.
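One mitigating detail worth knowing: while the uncommitted changes wiped out by a hard reset are indeed gone, the commits themselves can usually be brought back through the reflog, which records where HEAD has been. A minimal sketch:

```shell
# Two commits in a temporary repository
cd "$(mktemp -d)"
git init -q && git symbolic-ref HEAD refs/heads/master
git config user.name "Demo User" && git config user.email "demo@example.com"

printf "First commit" > file.txt
git add . && git commit -qm "First commit"
printf "\n\nSecond commit" >> file.txt
git commit -qam "Second commit"

git reset --hard HEAD~1        # the second commit vanishes from the log
# HEAD@{1} is where HEAD pointed just before the reset
git reset --hard "HEAD@{1}"    # the second commit is back
git log --oneline
```

After the second reset, the log once again shows both commits and file.txt contains the second commit’s content.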



Git Rebase vs. Merge


Every version control system offers a core set of functionalities; the ability to create branches and then merge changes between branches is offered by both centralized and distributed version control systems. Different systems behave differently, so while the end result is the merging of changes, the way it is accomplished and the resulting history that gets created differ.

Consider the following scenario:
You have created a feature branch from a long-running branch. Someone in the team commits changes into the parent long-running branch, and you have to bring those changes into the feature branch that you are using for active development. There are two ways to do that in Git:
1. Merge command
2. Rebase command

Both the commands achieve the same outcome of integrating the changes from parent long-running branch into the feature child branch. Where things differ, is the resulting commit history that gets created due to the usage of the commands.

Merge command

In the aforementioned scenario, if you use the merge command, the resulting commit history will bear a merge commit. Note that the merge command does not alter the history of your feature branch in any way. On the contrary, the merge commit provides a context for the changes brought in from the parent branch.
Run the following commands in bash shell to simulate a merge in git to bring in changes from a parent branch to a child branch:

$ mkdir Merge
$ cd Merge
$ git init
$ touch file.txt
$ printf "First instalment of work done in master branch" > file.txt
$ git add .
$ git commit -m "First commit in master branch"
$ git branch dev
$ git checkout dev
$ printf "\n\nWork done in dev branch" >> file.txt
$ git commit -am "First commit in dev branch"
$ git status
$ git checkout master
$ printf "\n\n\n\nSecond instalment of work done in master branch" >> file.txt
$ git commit -am "Second commit in master branch"
$ git checkout dev
$ git merge master
Resolve conflicts, if any, then perform a git commit, which is going to be a merge commit.
$ git log --oneline --graph

The result will be something like shown in the image below:



Figure 1 Log history after a merge. Notice the merge commit made in the end

Rebase command

The rebase command, as the name suggests, re-creates the base of the child branch while bringing in the changes from the parent branch. This results in a cleaner, unidirectional history, but the context under which the changes were brought in gets lost. It appears as if the child feature branch had contained the changes brought from the long-running parent branch since the beginning of its creation.

Run the following commands in bash shell to simulate a rebase in git to bring in changes from a parent branch to a child branch:

$ mkdir Rebase
$ cd Rebase
$ git init
$ touch file.txt
$ printf "Work done in master branch" > file.txt
$ git add .
$ git commit -m "First commit in master branch"
$ git branch dev
$ git checkout dev
$ printf "\n\nWork done in dev branch" >> file.txt
$ git commit -am "First commit in dev branch"
$ git checkout master
$ printf "\n\n\n\nSecond instalment of work done in master branch" >> file.txt
$ git commit -am "Second commit in master branch"
$ git checkout dev
$ git rebase master
Resolve conflicts, if any, then perform a git add . followed by git rebase --continue
$ git log --oneline --graph


The result will be something like shown in the image below:


Figure 2 Log history after a rebase. Notice the second commit from master inserted as a base commit for dev branch


The net outcome of both the rebase and merge commands, as seen above, is the same, i.e., the integration of changes from one branch into another – here, from parent to child.
Now, naturally, the question arises as to when to use which.
Merge is a non-destructive operation that preserves the chronological order of commits verbatim. The merge command creates a merge commit which brings a convergence point into the commit history, thereby recording the context under which the integration of changes occurred. This is essential when you are working on a public project and want every developer to have a shared context.
Rebasing a feature branch with changes brought from the long-running parent branch creates a clean, linear history in the feature branch, but eliminates the context under which the rebase was done. It is preferred when you are working as part of a small team where it is relatively easy to collaborate and communicate with all the developers about the changes made to the feature branch.
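The two histories can be compared side by side in a single repository by integrating the same parent branch into two copies of the feature branch, one via merge and one via rebase. A sketch with illustrative branch names:

```shell
# Build a parent branch (master) and a feature branch (dev) that diverge
cd "$(mktemp -d)"
git init -q && git symbolic-ref HEAD refs/heads/master
git config user.name "Demo User" && git config user.email "demo@example.com"

printf "base" > master.txt
git add . && git commit -qm "First commit in master branch"

git checkout -qb dev
printf "feature" > dev.txt
git add . && git commit -qm "First commit in dev branch"

git checkout -q master
printf "more" > master2.txt
git add . && git commit -qm "Second commit in master branch"

# Copy 1: integrate master via merge -> history gains a merge commit
git checkout -qb dev-merge dev
git merge -q master -m "Merge branch master into dev-merge"

# Copy 2: integrate master via rebase -> history stays linear
git checkout -qb dev-rebase dev
git rebase -q master

git log --oneline --graph dev-merge
git log --oneline --graph dev-rebase
```

The merged branch shows a convergence point (one merge commit), while the rebased branch shows a straight line of three ordinary commits.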



Understanding Patch in Git via Interactive Staging

Git is a super flexible version control system and offers capabilities that you may have felt the need for when working with other version control systems, but which were not available. One such capability is the ability to assign parts of a file to a commit, thereby creating a logical sequence to the work done in a file over time.

Let’s understand the patch operation in Git via a simple example.

Consider a text file that you have created over time and you want the work to go in two separate commits representing a logical sequence to work:

$ mkdir PatchDemo
$ cd PatchDemo
$ git init
$ touch file.txt
$ notepad file.txt


Key in the following lines inside the file as shown in the image below:

Line 1 - Work done in first commit

Line 3 - Work done in first commit

Line 5 - Work done in first commit

Line 7 - Work done in first commit

Line 9 - Work done in first commit

Line 11 - Work done in first commit

Line 13 - Work done in first commit

Line 15 - Work done in first commit

Line 17 - Work done in first commit

Line 19 - Work done in first commit

Line 21 - Work done in first commit

Line 23 - Work done in first commit

Line 25 - Work done in first commit

The reason why we are introducing text in this fashion is to allow sufficient room for Git to differentiate between the work done in different logical sequences. In Git terminology, the work that needs to go in a certain patch is called a hunk. A patch can contain several hunks.

For text to be considered as different hunks, there needs to be a large enough gap between pre-existing work and the new work meant to go into a hunk. Hence the alternate lines in the original file constitute a sufficiently large body of pre-existing work.

Now let’s commit the work done:

$ git add file.txt
$ git commit -m "First commit consisting of work done in file.txt"

Now let’s modify the file:
Line 1 - Work done in first commit
Line 2 – Work done in second commit
Line 3 - Work done in first commit

Line 5 - Work done in first commit

Line 7 - Work done in first commit

Line 9 - Work done in first commit

Line 11 - Work done in first commit
Line 12 – Work done in third commit
Line 13 - Work done in first commit

Line 15 - Work done in first commit

Line 17 - Work done in first commit

Line 19 - Work done in first commit

Line 21 - Work done in first commit

Line 23 - Work done in first commit
Line 24 – Work done in second commit
Line 25 - Work done in first commit

Now let’s do the interactive staging of the file.txt and use the patch functionality to stage file.txt for two separate commits:

$ git add -i

This will start the interactive staging and it will look somewhat like in the following figure:

Patch using Interactive Staging

Once you enter interactive staging, you will be asked to choose from several options viz. status, update, revert, add untracked, patch, diff, quit and help. You need to choose “p” or number 5 to perform a patch.

Once that is selected, Git presents you with a list of the files available in the repository. Note that in this case we only have one file, and it shows the status of the file as consisting of 3 unstaged changes. You need to enter the serial number of the file you want to patch; in this case, that would be 1.

Git will show an asterisk next to the selected file and will ask for permission to move further. You need to hit enter once to permit Git to move ahead.

Once we permit Git to parse the file, it will present the hunks, i.e., the changes in the file that are not available in its last committed version. The lines in green show the changes, while the ones in white show the pre-existing work. Just below the hunk is a query posted by Git, with one-letter acceptable answers. If you want to know more about the meaning of those letters, press “?”. It will present you with the meaning of all those options, as in the following figure:

Patch Options

In our case, we want to stage the first and the third hunks. Git presents the hunks one after another. The second hunk contains the added line “Line 12 – Work done in the third commit”, which we want to preserve for the third commit. So you need to press “y” for the first and third hunks and “n” for the second hunk.

Once you are done marking the hunks that need to go into a commit, you will be re-directed to the initial menu. This time, press “s” for getting the status and you would be presented something like shown in the following figure:

Status on Patch

We can see in the image that we have now staged two changes and one remains unstaged. The staged changes are the two hunks that we selected in the previous steps, while the one unstaged change is the hunk that we did not select to go into our patch.

If you run a diff, then you will be able to see the changes that will go into the next commit, appearing in green and the existing work appearing in white, as shown below:

Diff of staged changes

Now let’s quit this interactive staging utility and run git status. This would give us something like shown in the following figure:

Git Status

The reason Git shows the same file in both staged and unstaged form is that we have staged only part of the file, in the form of the two selected hunks.

Now we can go about our business as usual and make two commits – the first commit consisting of the two selected hunks and the second commit having the rest of the changes, as shown in the figure below:

Final Commits

As you can see, the first commit carried two changes and the second commit carried a single change. Once we were done with the two commits, there were no changes left to be committed.
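As a side note, the same hunk selection is reachable directly through git add -p <file>, which jumps straight into patch mode without the interactive menu. The following scripted sketch pipes in the answers a user would type ("y" then "n"); the file contents are illustrative:

```shell
# Commit a 20-line file, then make two edits far enough apart
# that Git sees them as two separate hunks
cd "$(mktemp -d)"
git init -q && git symbolic-ref HEAD refs/heads/master
git config user.name "Demo User" && git config user.email "demo@example.com"

seq 1 20 | sed 's/^/Line /' > file.txt
git add . && git commit -qm "First commit"

{ printf 'Line 1\nLine 1a - second commit\n'
  seq 2 20 | sed 's/^/Line /'
  printf 'Line 21 - second commit\n'
} > file.txt

# Stage the first hunk ("y") and skip the second ("n")
printf 'y\nn\n' | git add -p file.txt >/dev/null

git diff --cached --stat   # staged: only the first edit
git diff --stat            # unstaged: the second edit remains
```

The staged diff ends up containing only the first edit, while the second edit stays in the working directory for a later commit, exactly as in the menu-driven walkthrough above.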

How to Operate Multiple GitHub Accounts from a Single Computer

Developers love to use a single computer for all their needs, whether related to office work or to personal projects. Since using version control is a cardinal requirement in any software project, personal or professional, a situation often arises whereby a developer needs to operate multiple GitHub accounts from a single computer.

Consider a scenario in which you have two GitHub accounts – one sponsored by your employer and the other personal – and you want to use a single computer to operate both of them. This is possible by virtue of SSH keys and setting remote repositories under the correct SSH host. The following is the broad outline of the article:

1. Setting up SSH keys

2. Setting up a configuration file to easily co-ordinate among multiple accounts

3. Setting remotes correctly

1. Setting up SSH Keys

SSH key generation using the RSA algorithm produces a pair of keys – public and private. Per the norm, the private key remains secure with you and never travels over the wire, while the public key is distributed for verification. In this case the public key is stored in your GitHub account.

On Windows 10, open Git Bash, navigate to C:\Users\YourMSID\.ssh and key in the following command:

$ ssh-keygen -t rsa -b 4096 -C "your personal email address"


This will prompt you to create a new file. Provide a suitable name for the file so you are able to differentiate between the various files and hence the various keys. Also make sure to enter a suitable passphrase. Providing a passphrase further encrypts the private SSH key using a symmetric encryption algorithm, and will render theft of the private key useless.

Now generate a key-pair for the official account in a manner similar to described above:

$ ssh-keygen -t rsa -b 4096 -C "your official email address"


The next step after the generation of the key-pairs will be to copy the public SSH keys into the respective GitHub accounts. Assuming the key-pair for the personal account was saved under the name id_rsa_personalaccount, print the public key on the screen and copy the relevant section:

$ cat id_rsa_personalaccount.pub


Make sure that you copy the key that starts with “ssh-rsa” and ends with gibberish. Do not copy the email part.

Now navigate to your personal GitHub account and open account settings. Open the “SSH and GPG Keys” section. Click on “New SSH Key”, and copy over the key. Make sure to give a suitable title to remember the computer that the key is present on. You may generate a key on some other computer tied to your personal account. A suitable title will help you connect the key and the originating computer where it came from.

Perform the same routine for the SSH key corresponding to your official account with corresponding account on GitHub.

2. Setting up a configuration file to easily co-ordinate among multiple accounts

Now, to make life easy with multiple keys stored on a single computer, we will create a configuration file containing details about the host which needs to be connected to and the account with which to authenticate. Create a config file with the following command:

$ notepad config


And copy over the following details:

# Personal
Host personal
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa_personalaccount

# Work
Host work
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa_workaccount

One very important thing to note here is to provide the HostName as github.com and the user as “git” in both cases; the Host values are just local aliases that distinguish the two accounts.
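To see how the aliases get used: wherever a remote URL would normally say github.com, the alias goes in its place, and SSH then picks the matching key. A local sketch (account and repository names are made up; no network access is needed):

```shell
# A throwaway repository to demonstrate alias-based remote URLs
cd "$(mktemp -d)"
git init -q

# Personal repository -> authenticate with the "personal" identity
git remote add origin git@personal:yourpersonalusername/SomeRepo.git
git remote -v

# Re-pointing the same clone at a work account is just a URL change
git remote set-url origin git@work:yourworkusername/SomeRepo.git
git remote -v
```

Any fetch or push over such a remote will go to github.com, but with the key named in the matching Host entry of the config file.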

The next step will be to make sure that SSH agent is running on the machine. Run the following command on bash shell to make it run in the background:

$ eval $(ssh-agent -s) 

This will start the SSH agent if it is not already running in the background and allocate it a process identifier. Now we will add the SSH keys for the personal and official accounts to the agent with the following commands:

$ ssh-add id_rsa_personalaccount

$ ssh-add id_rsa_workaccount

If you had provided passphrases while creating the SSH keys, then you will have to provide the corresponding passphrases before adding them to the agent.

In order to make sure that we have indeed added the identities, run the following command:

$ ssh-add -l

Now that we have the keys added to the SSH agent and have the configuration file set up, we are in a position to test the authentication by connecting to GitHub with the credentials described in the config file. Run the following command:

$ ssh -T personal

The aforementioned command makes the ssh client pick up the config file placed in the .ssh folder by default and use the host information defined therein. The result of executing the command should be something like below:

Hi parakh! You've successfully authenticated, but GitHub does not provide shell access.

Repeat the process for the work account.

3. Setting remotes correctly

Now that we are done setting up the keys and configuration file, and have tested the authentication, we will create a dummy repository and push the changes to it. On GitHub, create a dummy repository, say, with the name DummyRepo. This will be the repository which we will use to upload our changes to, and pull the changes from. Navigate to a suitable location on your hard drive and run the following commands to create a repository mapped to DummyRepo:

$ mkdir DummyRepo
$ cd DummyRepo
$ git init
$ touch file1.txt
$ printf "This is my file" > file1.txt
$ git add .
$ git commit -m "first commit"
$ git remote add origin git@personal:UserNameOfPersonalAccountAtGitHub/DummyRepo.git
$ git push origin master

If all the setup has been done correctly, the push of changes to DummyRepo will succeed. Now test a pull from the repository. Create a text file over at the GitHub repository and commit it. Then run the following command:

$ git pull origin master

This should succeed in pulling the newly created file and commit.

Hope this article was helpful to you.

How to shortlist and buy a car

In today’s market, a consumer is spoiled for choice and buying a car can be a confusing and tiresome exercise. Recently, I was in the market for a new car and wanted to jot down the process that I went through to finalize my purchase.

I will be segregating this article into different sections, and depending on how far along you are in the process, you can either go through the entire article or jump to the section that piques your interest. The sections are as follows:

1. How to start your car hunt

2. Selection by features

3. Selection by budget

4. Selection by after-sales support

5. Selection by resale value

How to start your car hunt

It is advisable to foresee your requirement of a car at least 3 months prior to purchase. The reason this lead time helps, especially if you are a first-time buyer, is that once you decide to buy a new car, you can better observe yourself and judge your requirements, research prospective candidate cars, plan your finances and, most importantly, leave room to negotiate the best possible deal.

There are a few things that you must do irrespective of which of the following methods you choose to buy the car of your choice:

1. Talk to and visit multiple dealerships and take at least 2-3 test drives of the cars on your shortlist.

2. Make sure that the car you test drive does not have more than 5,000 km on its odometer. The more a test car has been driven, the less it will reflect the true potential of a new car of the same make and model.

3. Design your test drive in such a way that the route includes heavy and moderate traffic, at least two red-light junctions and at least one U-turn. The purpose of such a design is to test the acceleration of the car when starting from a stationary position at a red light, its lane-changing and handling capabilities and steering feedback, and to get a feel for the space required to turn the car around on a U-turn. Specifications are one thing, but the driving dynamics of a car can only be felt when it is actually driven.

4. Make sure that you are using the air conditioner (summer and/or winter) of the car at the setting of your choice and observe the blower noise levels.

5. If you like a quiet cabin, then observe the Noise-Vibration-Harshness (NVH) levels both when the car is idling and when it is being driven. At idle, the car will give you an idea of the NVH levels introduced by the engine. While being driven, it will give you an idea of the NVH levels due to the engine, tires, headwind and crosswind.

6. Do play music while test driving the vehicle. It will give you an indication of the capabilities of the music system and the level to which it can suppress noise while the car is being driven.

7. Try to schedule test drives of the different cars on your shortlist back to back. That will give you a better comparison of the driving dynamics of the different cars.

8. If you have an old car, never sell it before buying the new one. If you are planning to trade it in against the new one, then hand over your old car on the day of delivery of the new one. Due to circumstances beyond anyone's control, a dealership may not deliver your new car on the promised date, and if you get rid of your old car before getting the new one, you may suffer from limited mobility and increased travel expenditure and time.

9. When you talk with a dealership, express your interest in buying the car within a span of two to three weeks. This conveys to the dealer that you are serious about the purchase, and they will try to give you the best possible deal.

10. There are three periods in any given calendar year which are generally the best times to buy a car: the month of March, which marks the closing of the financial year; the festive season of Dussehra and Diwali in India (this will vary from country to country); and the month of December, which marks the closing of the calendar year. Car manufacturers give the best deals in these three periods. Manufacturers in India generally increase car prices from January 1 of the new calendar year, so you may want to take advantage of the discounts offered at the end of a calendar year.

Start with your functional requirements: do you need a hatchback, an entry-level sub-compact sedan, a full sedan, a utility vehicle like an SUV, a van or a pick-up truck? Since a car is generally kept for a minimum of 3-5 years, try to project requirements that may not exist at present. For example, you may not have much parking space available at the time of purchase, which may tilt your decision towards a compact hatchback, but a year into the future you may be planning to buy a new home with a dedicated garage. In such circumstances it is better to buy a compact sedan than a small hatchback that may feel out of place or inadequate once you move into your new home.

Selection by features

Under this method of selecting a car, you can focus on the features that you must absolutely have and features that are nice to have.

Since you are reading this article, I would humbly request that you start with safety features first and only then focus on anything else. Include as many safety features as possible, and then shortlist the cars that fit the bill. In my opinion, a car bought in 2019 should have at least the following safety features:

1. Antilock Braking System (ABS) (mandatory in India beginning April 1, 2019),

2. Electronic Brakeforce Distribution (EBD) (mandatory in India beginning April 1, 2019),

3. Driver (mandatory in India beginning April 1, 2019) and co-passenger side airbags,

4. Rear parking assist (mandatory in India beginning July 1, 2019),

5. Engine immobilizer with floating code (prevents someone from copying the cryptographic keys),

6. Central locking with audible and visible door-ajar warning (the car honks and blinks its indicators in case any door is not closed properly when locked from outside) and distress alarm (the car honks continuously and blinks its indicators to attract attention),

7. Rear de-fogger (this is a safety feature, not a convenience as manufacturers project),

8. Rear washer and wiper (again a safety feature, not a convenience),

9. Child locks in rear doors,

10. Adjustable head restraints in front and back (this again is projected as a comfort feature but is a safety feature; fixed head restraints are generally given in entry-level hatchbacks and compact sedans as a cost-cutting measure and, due to their small size, may not prevent whiplash injury to the driver and/or co-passenger(s)),

11. Rear seats with ISOFIX mounts for child seats,

12. Speed-dependent auto door locks,

13. Front and rear fog lamps,

14. Day and night Inside Rear-View Mirror (IRVM) (helps reduce the glare caused by the high-beam headlights of cars coming from behind at night),

15. At least one stability system like traction control, hill assist, corner stability control etc., which can help with manoeuvrability in speedy or tricky situations,

16. A crash test rating of at least 3 (in reference to one of the established crash testing regimes like Global NCAP or Euro NCAP)

Please note that apart from the airbags and head restraints, all the aforementioned safety measures are active-preventive in nature i.e. they help you reduce the probability of a crash. Airbags, head restraints and the physical structure of a car are passive-preventive safety measures which help passengers survive a crash.

With the safety features sorted out, focus on the creature comforts. I am listing a few of them with some rationale:

1. Automatic temperature control air conditioning (it is expensive, but saves fuel by automatically switching off the compressor when the desired temperature is reached),

2. Electric power steering (better than hydraulic, as it provides better feedback and does not present the possibility of an oil leak),

3. Height adjustable driver seat (improves visibility for short drivers),

4. Tilt and height adjustable steering (improves handling and driving comfort for short drivers),

5. Dead pedal or driver footrest (saves ankle strain when driving for long stretches on the highway),

6. Projector headlamps (helps better focus light beams, plus looks cool),

7. Electrically adjustable Outside Rear-View Mirrors (ORVM) (comfortable to use),

8. Steering mounted audio and phone controls (reduces distraction),

9. Speed-dependent volume control (again reduces distraction and is thoughtful engineering),

10. Distance to empty in driver information display (helps plan re-fuelling),

11. Multiple drive modes like sports, city and economy (eco mode saves fuel when on highway),

12. Seats with lumbar support and side bolstering (this is better felt than read as a specification),

13. Touchscreen enabled in-car entertainment with navigation maps (screen should be capacitive and at least 7 inches in size (measured diagonally)),

14. Wired (via USB port(s)) and wireless fast charging for mobile phones both in front and at rear,

15. Tire Pressure Monitoring System (TPMS) (helps you maintain appropriate tyre pressure which results in better road grip, superior handling, better ride quality and minimum fuel consumption)

It is always a better idea to get as many factory-fitted features as possible, as you get to enjoy the manufacturer's warranty on those items, as against the non-existent or hard-to-obtain warranty on after-market solutions. In many cases, you can also avail extended warranty on features that come factory-fitted.

Selection by budget

This selection methodology warrants minimum explanation. One piece of advice that bears mention here: it is always a good idea to keep a maximum figure in mind and then keep a 5% margin on top of it. This will help in case the manufacturer raises the price of the vehicle or you end up selecting a variant with higher specifications.

Also keep in mind that sometimes it is profitable to finance the car purchase rather than pay a lump sum from personal finances. This is predominantly true in a vibrant economy, where you can earn more money by investing your principal amount and earning a return higher than the low interest rate on a car loan. However, it makes sense to buy the car with lump-sum money if you want to save the hassle of going through the paperwork required to take a car loan and, once the loan is paid off, to get the car transferred to your name.
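The trade-off described above can be sketched with a toy calculation. All the figures below (price, rates, tenure) are hypothetical, and simple annual compounding stands in for a proper EMI amortization schedule:

```python
# Hypothetical figures: pay 10,00,000 cash, or take a 3-year loan at 9%
# while investing the cash at 12%.
price = 1_000_000
loan_rate, invest_rate, years = 0.09, 0.12, 3

loan_cost = price * (1 + loan_rate) ** years     # rough total cost of the loan
investment = price * (1 + invest_rate) ** years  # what the invested cash grows to

# Financing wins when the investment outgrows the cost of the loan.
print(round(investment - loan_cost))  # prints 109899
```

With these made-up numbers, financing comes out roughly a lakh ahead; flip the two rates and paying cash wins instead.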

Selection by after-sales support

Buying a car and getting after-sales service support for it are two different experiences. A car may be solidly built and jam-packed with features but can leave a sour taste if the after-sales support is sub-par. A lot of users want top-notch, hassle-free after-sales support for their vehicle. Fundamentally this translates into a few things, not limited to:

1. They want to feel taken care of when they visit the service centre. They need the service advisor to listen to the problem(s) they are facing, accurately diagnose them and provide the quickest and cheapest resolution. A customer wants to feel pampered, not necessarily with tea, muffins and croissants, but with a caring attitude from the service staff.

2. Fast turn-around time for service. Since the car a customer has given for service or repair may be the only car he/she has, he/she will want the vehicle back as soon as possible. Even if the turn-around time cannot be measured in hours and runs to a day or two, a customer will appreciate an accurate estimate and service done with a focus on quality.

3. Readily available spare parts are another factor that is critical to a fulfilling experience in this dimension.

Rely on the feedback provided by other customers in this area. Even the best car manufacturers can deliver a sub-par after-sales experience. Keep in mind that at the end of the day you will be dealing with human beings at the service centre, and they may be having a bad day. Give them at least two separate chances to serve you, and then decide for yourself. If you get a consistently bad experience from a service centre, change the service centre.

Selection by resale value

If you are the kind of person who likes to change his/her car every few years (read 2-4 years), your decision may also be affected by the perceived resale value of the car in the market. Some cars, even though they better their rivals in every respect, fetch a poor resale value owing to a negative perception. That perception may be due to perceived unreliability, cost of maintenance, a lack of after-sales service centres, or a combination of all these factors. In such circumstances it is best to stay away from such a car and go with one which may be your second choice but would fetch you a greater resale value.

If you intend to keep the car for as long as possible, then you can overlook this factor.


I hope this article gave you some succinct insight into the selection criteria you may use for your next car purchase. It is in no way exhaustive, but it will definitely nudge you in the right direction.

Raspberry Pi and Passwordless SSH Login

In any modern operating system, when you log in you are greeted with a login screen asking for your credentials. If you are the only user of the system, you may be spared the labor of filling in the username, but a password will still be required to log in.

We can forgo the exercise of typing the password by virtue of asymmetric encryption. Asymmetric encryption makes two types of keys available: a private key and a public key. As the names suggest, the public key can be shared freely, while the private key remains with the party that needs to prove its identity. In our case we will log into the Raspberry Pi using SSH and use key-based authentication, forgoing the need for a password. The Pi stores the public key, while the host machine running PuTTY holds the companion private key and uses it to answer a challenge from the Pi, which the Pi then verifies against the stored public key. If the verification succeeds, the user authenticates successfully. Note that the private key never travels over the wire.
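Conceptually, the challenge-response at the heart of key-based SSH login can be sketched as follows. This is a toy example using tiny, insecure textbook RSA numbers, purely to show why the private key never needs to travel over the wire (real SSH keys are 2048 bits or more):

```python
# Toy RSA key pair (textbook numbers, hopelessly insecure - illustration only).
p, q = 61, 53
n = p * q                      # public modulus (part of the public key)
e = 17                         # public exponent (part of the public key)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (the private key); Python 3.8+

# The server (the Pi) stores the public key (n, e) in authorized_keys.
# The client (Pageant) holds the private key d and never sends it anywhere.
challenge = 1234                        # server picks a random challenge
signature = pow(challenge, d, n)        # client signs it with the private key
ok = pow(signature, e, n) == challenge  # server verifies with the public key
print(ok)                               # prints True
```

Only the challenge and the signature cross the network; anyone intercepting them learns nothing that lets them forge the next signature.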

We need the following to make this a possibility:

1. PuTTYgen: To generate a pair of keys,

2. Pageant: To run in the background and maintain availability of the private key

Both of the aforementioned software components come bundled with PuTTY, so if you have PuTTY installed, there's a good chance that they are already on your system.

Generating a key pair

Open PuTTYgen and click on the “Generate” button to generate a pair of keys. Make sure that the “RSA” algorithm is selected with a key strength of 2048 bits. Once generated, use the in-built facility to save the public and private keys to a folder which you consider safe enough to retain your private key for future reference. DO NOT SHARE YOUR PRIVATE KEY WITH ANYONE.

Generate key pair with PuTTYgen

Now, the most important part. If you look at the format of the public key saved by PuTTYgen, you will find that it spans multiple lines. In that form it is unusable on the majority of systems and exists only for reference. Instead, copy the public key from the large “Key” window, which formats the key for use in OpenSSH-based authentication systems.

02 PuTTYgen Keys Window


Paste the key into a plain text file, name it “authorized_keys” and remove the .txt extension. This is the file that the Raspbian Stretch operating system will use without any further configuration.

Now run the Pageant agent on your Windows system and add the private key generated previously. The private key should have the extension “ppk”. Pageant runs on the host operating system from which you want to connect and keeps the private key handy.

03 Pageant

Configuring Raspberry Pi

Now let's configure our Raspberry Pi to accept key-based authentication. Log in the usual way with your username and password and follow these steps:

1. Create a .ssh folder (hidden folder) in the home directory of the user for whom you want to use key based authentication.

2. Copy over the public key (NOT THE PRIVATE KEY) that you generated previously and named “authorized_keys” to the folder. I used a thumb drive for the purpose.

3. Secure the key file and the .ssh folder. Only the user meant to use key-based authentication should be able to access the key file, in read-only and executable capacity. The .ssh folder should be off limits to everyone else.

4. Restart the ssh service.

5. Log out and log back in with the username for which you enabled key-based authentication.

mkdir .ssh
# Mount the thumb drive carrying the public key and copy the key over
sudo mount /dev/sda1 /mnt/usb
cp /mnt/usb/authorized_keys .ssh/
# Restrict the key file and the .ssh folder to the owner
sudo chmod 500 .ssh/authorized_keys
sudo chmod 700 .ssh
ls -al /home/parakh/.ssh/authorized_keys
sudo systemctl restart ssh


04 Commands Cropped
All this was made possible by the magic of asymmetric encryption.

05 Login Cropped

The good thing about this scheme is that if, for some reason, the public key on the Raspberry Pi gets corrupted, or Pageant is not running in the background on the host operating system, then you are offered the good old password challenge. I purposely exited Pageant and, as expected, the Pi challenged me for the password corresponding to my account.

06 login using password cropped



ASP.Net Core MVC on Raspberry Pi

Key Takeaway:

.Net Core allows for cross-platform operation of applications on supported hardware and software. This extends to ASP.Net Core. In this post I am going to show how to run ASP.Net Core in self-contained deployment mode on a Raspberry Pi 3.

Read On

In my last post I showed how to run a .Net Core console application on a Raspberry Pi. In this post I am going to show how to run an ASP.Net Core Web application on the Raspbian Stretch operating system using Raspberry Pi 3 hardware. Before you do that, make sure that you have assigned a static IP address to the Pi. You can learn how to do that in one of my previous posts.

First create a new ASP.Net Core Web application project in Visual Studio which does not rely on any kind of authentication.

ASP.Net Core Web App

ASP.NET Core Web Application

02 No authentication

Web application with no authentication

Since the aim of this post is to learn how to run an ASP.Net Core application on the Pi, let's keep things simple. We will not modify any of the pages in the application. Build and run the application locally to make sure that it works.

03 ASP.Net Core app running

Web application running out of the box

The application is running locally using IIS Express and listening at the address mentioned in the launchSettings.json file under Properties in the project hierarchy. When it comes to hosting the application on the Pi, we need to make sure that the application listens at the desired IP address and port. This is accomplished using the “UseUrls” method in the Program.cs file. The “UseUrls” method specifies the URLs on which the web host will listen for incoming requests. Since we will be using the Kestrel web server via the terminal on the Pi, it is important that we change the port in the Program.cs file, as shown in the image. Make sure that the port you assign is not in use by some other app on the Pi.

04 Program.cs file

Change the port to something that is available in Pi
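For reference, the change described above might look roughly like the following sketch, based on the default ASP.Net Core 2.0 program template. The port 5000 is purely illustrative; substitute any port that is free on your Pi:

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            // Listen on all network interfaces, not just localhost,
            // so the app is reachable from other machines on the network.
            .UseUrls("http://*:5000")
            .Build();
}
```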

Now publish the entire application for the linux-arm platform using the following command:

dotnet publish -r linux-arm


Now copy the entire publish directory to the Pi. This gives us not only our application but also the server infrastructure to serve it. Make sure that you have the appropriate permissions to run not only the application but also the Kestrel server under your account. You can use the following command to recursively grant your account execute permission on all the assemblies inside the publish folder:

chmod -R 755 publish


Once that is done, execute the application:

05 Kestrel running

Kestrel running

Now open the browser on your computer and use the IP address of your Pi in conjunction with the port on which the Kestrel server is listening.

06 Application running locally

ASP.NET Core Web application being served by Pi

Happy exploration.

Running a .Net Core Application on Raspberry Pi

Key Takeaway

Raspberry Pi is an experimenter's dream come true. It offers light to medium computing power in a small form factor with all the bells and whistles like Wi-Fi, Ethernet, Bluetooth, USB 2.0 ports etc. Further augmenting a tinkerer's abilities are the software capabilities, which now stand further extended by the introduction of .Net Core, which allows you to leverage your existing background with Visual C# and run code on the ARM architecture. In this post we will take a look at the steps one has to perform to run code created using Visual C# and targeting .Net Core on a Raspberry Pi 3.

Read On

.Net Core is a cross-platform offering from Microsoft which allows you to run your C# code on multiple hardware (x86, x64, ARM etc.) and software platforms (Windows, Linux and macOS). Of course, it does not provide universal coverage, and at the time of writing this post, industrial-grade long-term support versions of operating systems are being targeted with higher priority by Microsoft. That is natural; after all, those are the operating systems that organizations use to run their applications.

But there's something about frugal engineering, and Raspberry Pi is a prime example of it. With support from the community, it is possible to run code targeting .Net Core on a Raspberry Pi 3. Here are some points that you may want to read before going any further:

1. Note that compilation targeting ARM hardware (ARM32), for both the Linux and Windows software platforms, is not officially supported by Microsoft. So have realistic expectations and be prepared to get your hands dirty with some virtual dirt. See their official statement here.

2. At the time of writing this post, it is only possible to run .Net Core code on Raspberry Pi 2 and 3, and not on Pi Zero. This is because .Net Core at the moment targets ARMv7 instruction set and above for ARM architectures. Raspberry Pi 3 uses a Broadcom BCM2837 chip which uses ARMv8 instruction set, while Raspberry Pi Zero uses BCM2835 chip which uses ARMv6 instruction set. See the official statement here.

3. There is no Software Development Kit (SDK) available at the moment that lets you develop software on Raspberry Pi for Raspberry Pi, so you will have to develop your code in a supported development environment and then copy it over to the Pi for execution.

Alright, if you have made it this far, I am assuming that you want to give it a go.

Creation of a console application

First let's develop our application, and since this will be the first time we run a .Net Core project on a Raspberry Pi, let's keep things simple. Fire up Visual Studio and create a new console application project. I named mine “DotNetCoreOnRPi”. Just so that we can easily identify that things are working as desired, add a line to your main program:

Console.WriteLine("This program was created in Windows 10, but running in Raspberry Pi. :)");

Save your program and open the developer console and navigate to the folder containing your project.

Any application targeting .Net Core can be executed either as a self-contained application (Self-contained deployment) packing all the assemblies that its execution depends upon, or as an application depending on the .Net Core framework (Framework-dependent deployment). You can read more about that here. We will publish our application as a self-contained application on Raspberry Pi.

Self-contained deployment

In order for the application to execute on a supported operating system, it still needs some functionality provided by that operating system. The Raspbian Stretch operating system, the official operating system supported by the Raspberry Pi Foundation, comes missing a few essential packages, most notably “libunwind”. Run the following command to install them:

sudo apt-get install curl libunwind8 gettext

Once done we need to return to our developer prompt and publish the project targeting the “linux-arm” platform by using the following line of code:

dotnet publish -r linux-arm

This will create a folder named “linux-arm” inside bin/Debug/netcoreapp2.0. Within the linux-arm folder will be a folder named “publish”.

Copy the entire publish folder to a suitable location on your Pi. Open a terminal window and navigate to the publish folder. Make sure that you have the appropriate permission to execute the application. You can use the following command to grant the execute permission:

chmod 755 ./DotNetCoreOnRPi

Then execute the application by using the command:

./DotNetCoreOnRPi

.Net Core on Raspberry Pi

For now, all the official and unofficial documentation points to the fact that framework-dependent deployments are not supported. Let's hope that Microsoft starts supporting ARM32 builds officially so that we can reduce the size of our deployments by relying on a .Net Core framework installed on a system-wide basis.