

On *nix systems, you change file permissions with a command-line command. The problem with this is that you don't have a nice GUI like Windows does to click and choose which permissions go where.

The "777" actually represents two things. Each digit in the number encodes a bit string for the actual permissions (see below), and each digit's position identifies who those permissions apply to. The sevens break down as follows:

  1. Digit 1 = user
  2. Digit 2 = group
  3. Digit 3 = world

Whatever each digit is set to determines the permissions that the user, group, and world respectively receive.

Now, what does each bit in the string mean? A "1" means that the permission is granted and a "0" means that it is not. So, reading from left to right:

  1. Digit 1 = Read
  2. Digit 2 = Write
  3. Digit 3 = Execute

So, if you want the user to have read permission only on the file, the binary string would be "100", which is 4, so the file permission would be 4xx (where xx doesn't matter because we are only talking about the user).

The lazy man's file permission is "777" because you are essentially giving everyone access to do whatever they want with the file.
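The octal digits map straight onto the chmod command. A quick sketch (the file name here is just an example):

```shell
# Give the owner read/write/execute (binary 111 = 7) and give the
# group and world read-only (binary 100 = 4): 111 100 100 -> 744.
touch example.txt
chmod 744 example.txt
ls -l example.txt   # shows -rwxr--r--
```

The leading `-rwx` is the user's full access; the two `r--` groups show that group and world can only read.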

Posted in tools


For the past little while I've been plagued by the inability to copy and paste code/data on a remote machine. Today I finally decided to actually try to resolve the issue.

So, I went to my trusty source Google to tell me how to fix the issue, and I found out a little more about how it works.

Apparently when you RDP to a server, copying and pasting goes through an application called "rdpclip.exe". When copying and pasting fails, all you really need to do is restart this application.

So, to do that you can just type the following into a command prompt:

taskkill.exe /F /im rdpclip.exe & rdpclip.exe

This will kill any of the currently active versions of the application, and then restart it.

Happy pasting!


Posted in tools



Today I was working on fixing up my continuous integration server (running GitLab’s CI multi-runner).  I was making my own custom image based off of the core Ubuntu 16.04 image and all was working perfectly except for the fact that the mysql service would not start up.

This bothered me.  I knew that I had the image working before I rebuilt the server, so why weren’t my images working now?  Reading through the error logs didn’t really tell me much, but it did end up pointing me in the right direction.

Turns out my host (the machine running the Docker instance) didn’t have any swap space set up, and because of that the mysql service was not able to run.  So, to fix that I ran the following:

  1. Run dd if=/dev/zero of=/swapfile bs=1M count=1024
  2. Run mkswap /swapfile
  3. Run swapon /swapfile
  4. Add this line /swapfile swap swap defaults 0 0 to /etc/fstab
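Put together, the steps above look like this (run as root; the chmod line is my own addition, since swap files shouldn't be world-readable):

```shell
# Create a 1 GB file of zeroes to use as swap space.
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile        # restrict access to root
mkswap /swapfile           # format the file as swap
swapon /swapfile           # enable it immediately
# Persist across reboots:
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab
```

You can confirm it took effect with `swapon --show` or `free -m`.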

I can’t take the credit for the solution, but hopefully it saves you time!


Posted in tools


I’m going to start by asking a question … are you a jack of all trades?  Are you the best at everything you do?  Or, are there people who know better than you?  This same question holds true with teams you are on.

When you hire a team, or are brought onto a team, the people that are chosen are normally there to expand on the current set of skills that the team has available to them.  So, I want to ask, why not trust them to do what they were brought on to do?

Lots of people get lost in the fact that they want to lead a team and, in doing so, try to control each of the interactions which occur on it.  This sense of control can make it feel like the project is going smoothly and efficiently, but at the same time it will likely start to make people feel untrusted, which in turn affects productivity.  If people are constantly looking over their shoulders to see if they are being monitored, or feeling pressured to work, the work that they produce will suffer.  Not only that, their desire to exceed expectations will likely also diminish.

Instead, in a team situation, you need to really stop and let your team just work and do what they were brought on to do.

Of course, there will be times when this trust is abused, but those times are generally followed up by some tough conversations where the situation is addressed.  Even if it isn’t, that is still not a reason to distrust the team; it is an issue for the team leads (or managers) to deal with.  Lack of trust breeds negativity in the work environment, which in turn is counterproductive.

I would like to encourage you to trust those with whom you work.  Your team will end up being a lot more productive if you can.

Posted in tools


I’ve always been a lover of the command line and for some weird reason whatever environment I am in, I like to boot up applications from terminal.

This goes from the simplest of tasks, such as opening a Finder window from the current directory:

open .

To opening my text editor:

subl .

However, in order for that last command to work, you actually need to set up your Mac for it.  You might think that it is difficult but it is not.  It’s as simple as running the following command:

ln -s "/Applications/Sublime Text.app/Contents/SharedSupport/bin/subl" /usr/local/bin/subl

This will make a symbolic link between the subl command that ships inside the Sublime Text app and a folder which is normally on your path.

Posted in tools


Throughout university, with all of the individual projects that you do, you learn that you as an individual are able to build some pretty neat things on your own.  And if you were like me, you completely dreaded any sort of team work.  This, of course, was because team projects in university typically ended up with one or two people doing all of the work (which in my case was normally me).  This was a great learning experience for me, but it also scared me a little about working on teams.

Going into my first job out of university, I was going to be put on a sprint team.  I entered a little hesitant, but being right out of school I knew I still had a lot to learn, so I was all in!

I joined the team as a Junior Software Developer; the team consisted of several seniors, each of whom had different specialities.  I was excited!  I was given the unique opportunity to work with these people and actually choose what area I was going to specialize in.  At the time, I didn't know what I wanted to specialize in; all I wanted to do was code and learn along the way.  That is exactly what I did.

The team that I was now on worked way more efficiently than I could have ever imagined.  But not just that, the collaboration and actual team work that went on was amazing.  Through design and implementation discussions and working with various members of the team, I was able to learn skills that would help me excel in my career.  This was all because everyone on the team was able to check their egos at the door.  Had this not been the case, well, the experience and lessons learned would have been significantly different.

However, working in teams can also be a huge challenge.  There are people who always like to be heard and have their stuff be built the way they imagined it.  Situations like these can end up leading to poor decisions because the best work that comes out of a team is work that is built through a collaborative effort as it does the following:

  •  satisfies people's desire for their opinions to be heard
  •  combines the experiences of everyone involved to get a much better solution 
  •  promotes further team work and collaboration

One thing that you always have to remember when working in a team is that every member of the team is a person just like you.  So remember to treat them that way.  The team will work more cohesively if everyone is given the same respect that you would like to be given.

In conclusion, teamwork does not need to be as painful as it was in group projects throughout school.  Yes, it can be, but it can also be a very fruitful and fulfilling experience depending on what you put into it!

Posted in tools


Lots of times when I talk to site owners, one of the first questions I ask them is "what is your backup strategy?"  And shockingly enough, I've seen so many with strategies that would make me not want to sleep at night.

I've seen people who are so sure of their code that they decide they don't actually need to make backups, which just tells me how immature they are.  Do they really think the only thing that could affect their code's well-being is the code itself?  How about the thing that it runs on (i.e. the server)?  Servers crash from time to time, and you lose data.

The next "backup strategy" I've seen is people using cPanel or some other tool to make a local backup of the site and just leaving it there.  That's great if your database gets hacked or messed up and you have to revert it.  However, what happens if your server goes into an unrecoverable state?  Then you are out of luck, because you put all of your eggs in one basket.  The funny thing is, sometimes these people's backups require them to manually go in and click a button.  I personally would at least look for a somewhat automated approach, because the last thing you want is to have to revert your site's content back 1-2 months because you "forgot to take a backup".

Now that we have gone through two of the most common backup "strategies" that I have seen, let me talk about some that I would prefer to see people having.


  • Source Control - in this day and age there really isn't a reason why your code couldn't be in source control.  You can choose the flavour that you would like (be it Subversion, Git, Mercurial, etc); there are normally free services available for private source control repositories.  This means that repopulating your code is a simple "get/pull" command.
  • Offsite Backup - normally fairly cheap to set up (assuming you know how), you can get a virtual private server (VPS) where you just "rsync" or "scp" data from one server to the other.  As long as this is in a different data centre than the main hosting service, the likelihood of both going down is much lower.  Your recovery strategy would be to copy the latest code back to your server and deploy the database.


  • Master/Slave Server - a master/slave setup allows for constant syncing of your database between two servers.  The actual name varies depending on the platform, but they all use the same concepts.  Every X minutes the slave server polls the master server for any new/updated data, and then replays those changes on its local database.  Then, if you are really paranoid (like I am), you could have a backup run off of the slave server every so often; that way you have an extra copy of the complete database (which can be stored externally) and it doesn't impact the performance of your actual application.
  • Offsite Backup - similar to code, a database backup is fairly easy to export and ship to another location.  It can all reside in a single file.  Zipping this up and secure-copying it to another server will add yet again a bit more overhead when you are forced to go into disaster recovery mode, but it will at least protect you in the unfortunate event of a server failure.
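As a sketch of that offsite database backup, something like the following could run nightly from cron (the paths and host name here are hypothetical):

```shell
#!/bin/sh
# Nightly sketch: dump the database, compress it, ship it offsite.
STAMP=$(date +%Y%m%d)                        # e.g. 20240131
mysqldump --all-databases > "/var/backups/db-$STAMP.sql"
gzip -f "/var/backups/db-$STAMP.sql"         # leaves db-$STAMP.sql.gz
scp "/var/backups/db-$STAMP.sql.gz" backup@offsite.example.com:/backups/
```

Because each dump is date-stamped, the offsite server accumulates a history of backups rather than one file that keeps getting overwritten.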

If you don't take much else away from this article, there are two key points that I want you to remember:

  • Servers fail; make sure you have a backup plan for when your primary server goes down
  • Don't rely on someone remembering to take a backup of something, because it will be forgotten.  Set up regularly scheduled jobs to make your backups and store them somewhere safe.

I hope you found this informative!

Thanks for reading.

Posted in tools


In a data load, we receive some zipped files (*.gz).  I found that when you use the regular extraction:

gzip.exe -d -f "foo.csv.gz"

the “foo.csv.gz” file will automatically be deleted by gzip and replaced with “foo.csv”.

However, since we archive these files, keeping the uncompressed CSVs would take up a fair bit more space, so we wanted to store only the zipped version of the flat file.

Originally I thought I would need to unzip them, then rezip all of the files at the end of the process to get them ready for archival, but that was just a lot of extra processing.

Finally I came across a way to avoid the deletion of the file.  If you output the file to standard out, you can redirect that output stream into your output file (without affecting the original .gz file):


gzip.exe -d -f -c "foo.csv.gz" > foo.csv

This takes all of the contents of the gz file and pushes it to STDOUT; then, using the >, we redirect what was written to STDOUT into the CSV file where we need it.
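If you have a whole directory of these files, the same trick works in a loop (a sketch, assuming the *.csv.gz naming used above):

```shell
# Decompress every archive to its .csv name, keeping the .gz originals.
for f in *.csv.gz; do
  gzip -d -c "$f" > "${f%.gz}"
done
```

The `${f%.gz}` expansion just strips the trailing `.gz`, so `foo.csv.gz` is written out as `foo.csv`.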

Hope this helps


Posted in tools