Not much is happening on the work front. We continue to forge ahead on the large integration project. We took some time in mid-December to transition our code base to Git and TFS 2017. We had just released a large branch that contained about six months of work, so the end of the year was the optimal time to do it. We had some hiccups getting build definitions working again, pointing Octopus Deploy to all the right places, etc., but by and large it was a success. Now the work continues to integrate and process funding both up and down to our new vendor. We continue to refine our development pattern, and it's so refreshing to step away from the textbook and evaluate what we really need instead of creating fourteen layers until we absolutely have to. When you have a small team, all of those layers are just unnecessary complexity that makes the code less readable.

Dynamics & Inherited Interfaces

A coworker and I made an interesting discovery today. I was getting a SQL error back indicating that a method–one that was clearly available to my object–wasn't being found. We'd run into this issue before and found a workaround, but it wasn't until today that we dug in and figured out why it was happening.

We’re using Dapper as our ORM. We use these statements all the time.

_db.Query("SELECT TOP 1 * FROM dbo.MyTable");
_db.Execute("UPDATE dbo.MyTable SET MyField = 'new value'");

Many times we pass in data as such:

_db.Query("SELECT TOP 1 * FROM dbo.MyTable WHERE Id = @id", new {id = 5});

In this case, 5 would be some strongly typed object or variable. Dapper does great with these. We created a wrapper for Dapper that gives us some extra functionality:

public interface ICoolDb : IDapperDatabase
{
    void WithTransaction(Action action);
}

The above is just an example of something you could do to wrap a bunch of SQL statements in a transaction. No biggie, right? This is also where the issue lies. ICoolDb only declares one method of its own. If I pass a dynamic object to Execute or Query, the compiler defers binding until runtime. When the runtime binder evaluates the call, it doesn't pick up the Execute or Query methods inherited from IDapperDatabase, so it throws an error. For some reason, the binder doesn't know to work up the inheritance tree to find those methods. #fail

_db.Query("SELECT TOP 1 * FROM dbo.MyTable WHERE Id = @id", new {id = 5});

This works because id is strongly typed as an int. The fix that we came up with a while back was this:

_coolDb.Execute(sql, account as object);

This casts our dynamic account object to object, which makes everyone happy. I'm not sure whether this same quirk exists in later versions of .NET, but we're running 4.5 and it's there. It's annoying to tack that as object bit on each time I'm passing a dynamic, but I guess it's the best we can do.
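For anyone who wants to poke at it, here's a rough, self-contained repro of the shape of the problem using hypothetical stand-in types (this isn't our real wrapper, and the failing line is commented out so the snippet compiles and runs):

using System;

// Hypothetical stand-ins for IDapperDatabase and our wrapper interface.
public interface IBaseDb
{
    int Execute(string sql, object param);
}

public interface ICoolDb : IBaseDb
{
    void WithTransaction(Action action);
}

public class CoolDb : ICoolDb
{
    public int Execute(string sql, object param)
    {
        return 1; // pretend we actually hit SQL Server here
    }

    public void WithTransaction(Action action)
    {
        action();
    }
}

public static class Program
{
    public static void Main()
    {
        ICoolDb db = new CoolDb();
        dynamic account = new { Id = 5 };

        // On our .NET 4.5 setup, the next line blows up at runtime with a
        // RuntimeBinderException: the binder doesn't find Execute, which lives
        // on the inherited IBaseDb rather than on ICoolDb itself.
        // db.Execute("UPDATE dbo.MyTable SET MyField = @Id", account);

        // Casting the dynamic back to object keeps the binding at compile time,
        // so the inherited method resolves just fine.
        db.Execute("UPDATE dbo.MyTable SET MyField = @Id", account as object);
    }
}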

Scale Or Optimize

As documented earlier this week, I migrated my whole online ecosystem to AWS over the weekend. It was a fun project. I shared this with a co-worker at lunch today, which naturally turned into a conversation about our own environment in the healthcare industry. Because of HIPAA compliance and such, it's probably not a worthwhile endeavor (for now at least). Our conversation led to scale in general. Does every app really need to scale to the level that AWS allows? Probably not. We own our servers, and, as far as I know, we have maybe three boxes…not even enough to fill a rack. Maybe there's more, but we support tons of stuff on those three boxes. AWS is way overkill for my measly WordPress site, but it was fun to build and get some experience nonetheless.

So, should we plan for scaling? Or should we optimize our existing code base to be as performant as possible while minimizing additional overhead in load balancers and other such fun magic? We chose to simplify. Less code to maintain means we get to do more fun projects in the future, because we carry far less cognitive load in keeping up an increasingly complex code base that has more room for bugs. Fewer lines and a less complicated architecture keep smells from creeping in because there's no place for them to hide.

Migrating to AWS

I spent some time over the course of this past week migrating my entire online ecosystem to AWS. There was talk at work about using AWS SQS for some pushes we make of various objects to various back-end service providers. I already have a CDN at AWS, so I took the opportunity to migrate a piece at a time.

I started by moving DNS to Route53. I created a new CloudFront distribution for my static site and requested some new certificates using ACM. Once I got certs applied to the new CloudFront distribution, I moved my static site (this one) to S3. I shuffled the DNS around to point to the right place. I'd spent some time a while back with GitLab's pipelines to automatically deploy this Jekyll site, which left me with a dilemma at this point: I couldn't automatically deploy to S3 from GitLab. Well, I could; I just wasn't sure how. More on this later.

The other project I took on was getting familiar with RDS and EC2. I started by pointing the existing site (not yet on AWS) to the RDS instance. Once that was working, I recreated my WordPress install on EC2. That worked too, but certs were broken… 😕 The whole reason I switched to ACM was to keep from having to upload new certs through the AWS CLI each time I needed to renew. After some quick research, I discovered that ACM can manage certs automatically, but only for Elastic Beanstalk and load-balanced instances. Oi! I guess I'm going to get real comfortable with AWS!

I did some reading on ELB and got everything sorted out there for a WordPress install. After some trial and error, I finally got it working on Saturday morning. This was the trickiest part, not because of anything AWS-related, but because of how I wanted my SSL to work. I don't want anything non-SSL on any site. After some finagling of the WordPress database and a lot of tinkering with the ELB environment and load balancer, I've got it working exactly how I want it.
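For what it's worth, the database finagling for forcing SSL everywhere mostly comes down to pointing WordPress's own URLs at https. Here's a sketch of the kind of change involved, with hypothetical database, user, and domain names rather than my actual ones:

# update the two options WordPress uses to build its own URLs
mysql -u wp_user -p wordpress_db \
  -e "UPDATE wp_options SET option_value = 'https://family-site.whatever' WHERE option_name IN ('siteurl', 'home');"

Old http:// links baked into post content may still need a search-and-replace pass on top of that.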

Now, back to CI for my static site. Naturally I started with the code repository. I moved the repo to CodeCommit, created a CodeBuild project, then automated all of it with CodePipeline. This was the easiest part of the whole process, and it put a big smile on my face. Even this post has been automatically deployed using all of the magic above. Git push, and we’re on our way.
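I won't walk through every CodeBuild screen, but the build itself boils down to a buildspec file at the root of the repo. Here's a minimal sketch of the idea rather than my exact file; the bucket name is a placeholder, and it assumes a build image that has Ruby available:

version: 0.2

phases:
  install:
    commands:
      - gem install bundler
      - bundle install
  build:
    commands:
      - bundle exec jekyll build
  post_build:
    commands:
      # the CodeBuild service role needs write access to this bucket
      - aws s3 sync _site/ s3://your-static-site-bucket --delete

CodePipeline then just watches the CodeCommit repo and hands each new commit off to CodeBuild.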

To summarize:

  • Moved DNS to Route53
  • Moved SSL certs to ACM
  • Moved static professional site to S3
  • Moved WordPress family site to ELB with load balancer (like we’ll ever need it 😉 )
  • Moved repos for both sites to CodeCommit
  • Automated deployment for each site to ELB and S3 with CodeBuild and CodePipeline respectively
  • All of the above included establishing IAM roles and policies to ensure appropriate access for every service involved in the process

After some lengthy evenings (in the midst of trying to buy a house!), I was done…at least for now. It was frustrating at times because I was in territory that was very new and foreign to me. Thanks to lots of tutorials, I was successful.

I was very pleased with the entire experience, though. Amazon has made it incredibly easy to onboard into their ecosystem…almost too easy. It was an extremely fun project. Their documentation is stellar. The community around it is also very active and seems to provide lots of helpful tips. I'm not on a paid support plan, so it took a little bit of digging to find the answers I needed, but I can't see myself moving from AWS for quite a while.

Why I Deleted My LinkedIn Account

I pulled the trigger on deleting my LinkedIn account just last week. Some people might think this is crazy at a time when social networking is pretty much a given. I don't use Facebook for much of anything these days. I might use it for authentication to a site or service if I'm feeling lazy. Twitter gets the same treatment. I keep a very slim online profile out of privacy concerns above all else, but secondarily, I just don't want the cognitive load. I like to own my information, own my writing, own my eyeballs…

Let me preface this next paragraph with this: recruiters are great, and I'm thankful that I found my current job through one.

That said, with the number of recruiter calls I got last year, continuing even into this year despite my being very happy with my job and not actively searching, I started to wonder why they might think I'm interested in a new job. Like most of you, I routinely brush up my resume on LinkedIn to keep it current and document various projects I work on. I wonder if LinkedIn has some sort of mechanism that puts that info into recruiters' hands even though I chose the "don't publicize this update" option. As I said, recruiters are awesome. I'm just not in a season where I want to entertain anything but once-in-a-lifetime opportunities.

If you're looking for me out on the internet, this is my home. I can't share much of what I do because it's HIPAA protected, but I'll try to share some snippets and thoughts from time to time here.

Refactor or Hack

In my day to day, I often come across code that was written before I arrived. Sometimes it's great. Easy to understand, abstracted at an appropriate level, maybe even commented (gasp!). Sometimes, though, it's been written poorly. If the person is still with the company, I can just ask about their intentions or for clarity if something isn't easily followed. The worst case is when the code is written poorly by someone who is no longer with the company. Now I have code that is hard to understand, the person who wrote it is gone, and I'm being asked to either fix it or implement a new feature utilizing pieces of this existing code.

As I dig in further to understand what happened before me, I realize that there are a multitude of issues with the existing code. It didn't account for this case or that business rule it was supposed to. It's duplicating data in two tables, or it's written in an unintelligible way in order to be "fancy". I like fancy code as much as the next developer, but more often than not, simplicity is best. Me: 0. Technical Debt: 1.

This brings me to the all-important question: refactor or hack? Should I take the time to refactor the existing code, clean it up, and make it easier for the next person who comes along, or should I hack at the existing code to deliver the feature or bug fix on time and on budget? It's a difficult question for a developer to answer, because most of us want to create the best possible code base even if it takes longer. We take pride in our work and genuinely want to do a good job. Often, though, the pressure from the business side reveals that they don't care how well the code works under the covers. They just want it to work, and work today.

I’m fortunate to have a great business team that understands the long game and genuinely wants an easily maintainable code base, but I still struggle with “how long has this story been kicked down the sprint road because of other broken places in the code?” I don’t have an answer for this question at the moment. It’s just a thought that’s been tumbling around in my mind as I’ve added at least three stories to refactor large portions of code in an effort to simplify and make go-forward maintenance easier.

Working with Knockout

Much of the last two weeks has been spent working with Knockout and Kendo. These two do not play nicely together. The application that I support has seen a lot of "churn" in terms of what technologies are being used, which can be both good and bad. As a new person, it's difficult to know what to use, and when, and where. The more senior developers are better able to answer that question on a case-by-case basis and know why you would pick one over the other. They would also know why it doesn't make much sense to go back and convert everything…

Working with Knockout JS has been a generally positive experience. I'm still trying to grasp the entire flow from DB→POCO→JSON→Browser, but from what I can gather, there's a bit of plumbing that wires up all the pieces and parts properly. The really nice thing about KO is its mapping plugin: get your JSON object back, shove it into a JavaScript view model, and the UI updates. Done.

Own Your Own Business

I've had this dream in my heart to be in business for myself for quite some time. For a very short period of time, I was. I had a Tax ID and everything. I did market research for a fitness product called a kettlebell. This wasn't just any kettlebell, though. It was adjustable. At the time, there was nothing else like it being sold on Amazon. So, what does any halfway decent entrepreneur do in this situation? Validate the idea. I threw up a simple landing page, put out some Facebook ads to drive traffic to it, and started collecting emails. My target audience was 18- to 25-year-olds who had liked CrossFit on their profiles. I had about one million potential customers. At $150 per unit, there was potentially a profitable business at hand. After a week, my idea was validated. I purchased 50 of these things, got them through customs, onto freight trucks, and into a fulfillment center ready to ship out to customers. Through some tough lessons learned, margin per unit turned out to be about $20. That isn't terrible if you're able to sell dozens each month, but I just didn't have the cash flow to do that. I sold all 50 that I'd ordered and watched the larger companies in the mix lower their prices to compete and push the smaller companies (mine) out of business.

I have no regrets about this whole experience, but here's the most valuable lesson I learned during the process: everyone is already in business for themselves. There are only two major categories of things that get sold: products and services. If you're like me, you are a software developer and you "sell" a mix of both. Software products are sold to clients (internal and external), and services are sold in terms of ongoing maintenance and support of those products. As an individual in the business world, whether as a full-time employee, contractor, or consultant, you sell your own expertise and abilities to employers or clients every day. The trick is to constantly look at your job this way and, as Shawn Blanc says, "show up every day". Earn the trust and confidence of your audience, whether that's your product owner, manager, or teammates. Conduct yourself on a day-to-day basis as if you were an individual business owner earning your salary. If you want to own your own business, it starts with the mindset at your current job. When the time comes to strike out on your own, all of these soft skills will be invaluable.


I finally got my own portfolio going. I built it with a little bit of help from some well supported projects that you can take a look at on the about page.

I'm excited to have a space that will hopefully demonstrate some of my development ability. As I've looked for jobs over the years, it's one thing to say you know how to build a site or an app. It's quite another to have built a site using some popular tools, hosted your own CDN files, and added SSL to all domains in the pipeline. Granted, these things are probably second nature to seasoned professionals, but they were new to me, and now they're part of my technical abilities.

If you'd like a more resume-y style perspective on what I've done, check out my career history.

Let's Encrypt, Nginx & AWS CloudFront

Since I'm learning so many different technologies, I figured it would be helpful to set up a sub domain for things not directly related to the Walsh family. In typical Stephen-like fashion, I didn't half-ass it. It's probably overkill, over-engineered, and way more complicated than it needs to be, but that's why you love me. ♥️

High Level

If you don't want to read through the more technical steps, take a look at the table below for the high level. In a nutshell, I wanted simple for the top-level domain (i.e. WordPress), and I wanted a sub domain that wasn't WordPress where I could play around and keep some of my professional work grouped together as a working portfolio of sorts.

Domain                           Location                  SSL'd
Top-level domain (WordPress)     Nginx on Digital Ocean    Yes
Sub domain (on the same server)  Nginx on Digital Ocean    Yes
Portfolio sub domain             S3/CloudFront             Yes

Again, for those who want to skim, here are the tutorials I followed to get the above set up. I'm a big fan of Digital Ocean, and you should be too 🙂


  • Ubuntu 16.04 initial server setup
  • Nginx server blocks (for the top level and sub domains on Digital Ocean)
  • SSL Encryption with Let's Encrypt for Nginx
  • S3 Static Site CloudFront CDN
  • Let's Encrypt S3/CloudFront CDN
  • MySQL backup to S3

Now, here are the specifics of how I set things up. I hope this is beneficial in some way to others out there who want to run a secure, "light-weight" infrastructure that costs less than $15/month. I'm assuming that you know how to get around comfortably on the command line, know how to make some basic DNS adjustments when needed, and have a "can-do" attitude, because you'll more than likely enter something incorrectly like I did and have to back-track. Get comfortable with snapshotting your server so you can roll back without losing all of your work.

Ubuntu 16.04 initial server setup

I've used Ubuntu for years. I've tried other stuff, but Ubuntu just seems to keep trucking along with no signs of stopping. There is nothing in this tutorial that needed anything special for my setup.

Nginx server blocks to set up a sub domain hosted on the same server

I hadn't used Nginx for WordPress until recently. I've been happy with how easy it is to set up, even with SSL (which we'll get to later). Nothing really important to note here. I'll give some more detail on the server blocks in the section below, because they contain SSL directives that won't make sense outside the context of encryption.

SSL Encryption with Let’s Encrypt for Nginx

This is where things start to get a bit more custom and interesting. Let's Encrypt hasn't provided any "out of the box" functionality for Nginx. It's not wholly unsupported, but it does require a bit of know-how. First, this tutorial is going to ask you to install Let's Encrypt manually via a Git package. That isn't necessary; simply run the apt-get command. If you have a sudo user set up (which you should!), this will run without a hitch.

sudo apt-get install letsencrypt

Installing from the package manager makes running the rest of the commands here quite a bit easier. Instead of:

cd /opt/letsencrypt
./letsencrypt-auto --foobar command

you can just do:

sudo letsencrypt --foobar command

When you get to the point of actually wanting to generate your certs, you run something like this, with your own domains and web roots in place of the placeholders:

sudo letsencrypt certonly --webroot \
-w /var/www/your-site.whatever -d your-site.whatever -d www.your-site.whatever \
-w /var/www/sub.your-site.whatever -d sub.your-site.whatever

You'll notice I didn't include the S3/CloudFront sub domain. There's a reason for that, which we'll get to shortly. If you followed the guide above, you'll have three sites in /etc/nginx/sites-available. Now for their server blocks.

Default server block

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /var/www/html;

        # Add index.php to the list if you are using PHP
        index index.php index.html index.htm index.nginx-debian.html;

        server_name _;

        # other configuration below
}

Top level domain server blocks

server {
        listen 80;
        listen [::]:80;
        return 301 https://$server_name$request_uri; # NOTE this line. It may have to be commented out, then commented back in after the S3/CloudFront configuration
}

server {
        # SSL configuration

        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;
        include snippets/ssl-your-site.whatever.conf;
        include snippets/ssl-params.conf;

        root /var/www/your-site.whatever;

        # Add index.php to the list if you are using PHP
        index index.php index.html index.htm index.nginx-debian.html;

        # other configuration below
}

Sub domain server blocks

server {
        listen 80;
        listen [::]:80;
        return 301 https://$server_name$request_uri;
}

server {
        # SSL configuration

        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        include snippets/ssl-your-site.whatever.conf;  # NOTE that this is the same file as the top level domain based on the tutorial
        include snippets/ssl-params.conf;

        root /var/www/sub.your-site.whatever;

        # Add index.php to the list if you are using PHP
        index index.php index.html index.htm index.nginx-debian.html;

        # other configuration below
}

Make sure you point your top level and sub domain server blocks at the same certificate snippet (the ssl-your-site.whatever.conf file above) and ssl-params.conf. Note also that you may have to comment out the 301 redirect on your top level domain if the S3/CloudFront configuration doesn't work as expected. That line reroutes traffic from a standard http:// request to the https:// endpoint, which doesn't yet exist for the S3/CloudFront sub domain.

Check your work

Can you get to both domains over an encrypted connection? If you followed the server block tutorial, you should have some test index.html files to validate that your domain and sub domain are routing and working properly with SSL. Try it with http:// to make sure it's rerouting. If so, take a snapshot of your VPS. I took about 5-7 snapshots at various points throughout this process in case I messed something up.
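A quick way to check from the command line (substitute your own domains for the placeholders):

# plain http should bounce to https with a 301...
curl -I http://your-site.whatever
curl -I http://sub.your-site.whatever

# ...and https should serve the test pages with a 200
curl -I https://your-site.whatever
curl -I https://sub.your-site.whatever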

S3 Static Site CloudFront CDN

Setting up the static site was pretty straightforward. Just be mindful that I am not using Amazon Route 53 for DNS, so I skipped those steps in this tutorial. I did a standard CNAME pointing from my sub domain to the CloudFront-provided domain name. Make sure you can load the CloudFront-provided URL before moving forward; it may take 15-20 minutes to provision everything and be ready for traffic. Remember that you are creating a bucket for your sub domain, so title it appropriately (in my case, the bucket is named after the sub domain itself). Since I'm not using a top level domain and a www-style domain in S3, I had to adjust the tutorial to suit my needs.

Let’s Encrypt S3/CloudFront CDN

This tutorial was a great help, but I pieced together bits from some other sites as well. The first thing you'll want to do is install the AWS CLI. This will require you to install python-pip from apt-get if you're using Ubuntu. While you're waiting on that, go set up a new IAM user on AWS and give it an appropriate policy for uploading certificates.
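For reference, the install itself is just a couple of commands (assuming Ubuntu and a sudo user):

# python-pip from apt, then the AWS CLI from pip
sudo apt-get install python-pip
sudo pip install awscli

As for the IAM policy, the certificate upload step later on needs at least the iam:UploadServerCertificate action.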

To recap: we have the AWS CLI installed, and we have an IAM user ready to send a not-yet-generated certificate up for use with CloudFront. Still with me? If you haven't already, take a snapshot before we start working on this next section.

First, you should already have a terminal open from which you've been running all of these other commands. Open a second one. This is important. When you run the next letsencrypt command, it's going to try to validate a file served from your CloudFront sub domain. You must place that file in the S3 bucket with the AWS CLI before pressing Enter in the first terminal window. We'll generate that file in just a bit. Ready?

 sudo letsencrypt certonly --manual

This is going to prompt you for your CloudFront sub domain (i.e. the sub domain that points at your CloudFront distribution). Remember when we didn't generate a certificate for this domain earlier? That's because we wanted to run this one manually. It's special. Follow the prompts and you'll get something that looks like this:

Make sure your web server displays the following content at
http://your-site.whatever/.well-known/acme-challenge/some_long_path before continuing:

some_long_string

Content-Type header MUST be set to text/plain.

... <snip> ...
Press ENTER to continue

Notice it says, "Press ENTER to continue". Don't do that yet. We have to copy the acme-challenge file up to your S3 bucket first. If you already ran the aws configure command with your new IAM user's keys, great. If not, go ahead and run this in a separate terminal window:

aws configure

It’s going to prompt you for your AWS Access Key ID, AWS Secret Access Key, and your Default region name. You should enter the default region for your S3 bucket. When you’ve configured this correctly, you can view your credentials and config files in ~/.aws just to make sure they are correct.

Now we're ready to put the acme-challenge file into our S3 bucket. Store the some_long_string content in a temp file first, making sure you replace some_long_string with the long string from your first terminal window.

printf "%s" some_long_string > /tmp/acme-challenge

Then upload the verification file to your S3 bucket ensuring that you’ve replaced some_long_path with the long path from your first terminal window.

aws s3 cp \
/tmp/acme-challenge \
s3://your_s3_bucket_name/.well-known/acme-challenge/some_long_path \
--content-type text/plain

Sanity check that the file is there.

curl -D - http://your-site.whatever/.well-known/acme-challenge/some_long_path

I had a bit of trouble with this. I was able to successfully upload the file to my S3 bucket, but for some reason I couldn't get to it at my sub domain address. Remember that line we noted in our top level domain server block? Try commenting that out and restarting Nginx. You can also set each level of that URL in your S3 bucket to public by clicking the checkbox, then going to Actions and selecting "Make Public". Do this starting with .well-known, then acme-challenge, then the actual some_long_path file itself. Try loading the file in a private browser session with no caches until it works, then run the curl command above. Once you're able to view that file, you can press Enter in your first terminal window. If you get a "Self-verify of challenge failed." error from Let's Encrypt, it's more than likely because it can't get to that file. Read the description of the error and troubleshoot as necessary; it's a pretty detailed log of what happened.
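If you'd rather not click around the console, the same "Make Public" step can be done with the CLI you just configured. A sketch, using the same placeholder bucket and path names as above:

# grant public read on the challenge object itself
aws s3api put-object-acl \
  --bucket your_s3_bucket_name \
  --key .well-known/acme-challenge/some_long_path \
  --acl public-read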

Assuming you've passed the self-verify, you should get a message like "Congratulations! Your certificate and chain have been saved at /etc/letsencrypt/live/your-site.whatever/fullchain.pem". Now you can upload your certificate for use in CloudFront. It took some trial and error, but the following worked for me:

sudo aws iam upload-server-certificate \
--server-certificate-name your-site.whatever \
--certificate-body file:///etc/letsencrypt/live/your-site.whatever/cert.pem \
--private-key file:///etc/letsencrypt/live/your-site.whatever/privkey.pem \
--certificate-chain file:///etc/letsencrypt/live/your-site.whatever/chain.pem \
--path /cloudfront/

Obviously, you'll want to make sure that the your-site.whatever directory is wherever your certificates are actually stored. Also mind the forward slashes, particularly the third / in the file:/// directive. My certs are stored in /etc/letsencrypt, so I needed that third one to tell the AWS CLI where to start looking for the file. If it works, you'll get a confirmation back. Now head over to CloudFront, select the certificate you just uploaded under "Distribution Settings", and complete the remainder of Nathan Parry's tutorial from Step 4.

If everything is working, take a snap shot. 📸

MySQL backup to S3

Home stretch! Once you've got all of these pieces in place, you can finish up the WordPress installation if that's what you're using. If not, you're free to go. 🏃 If you are using WordPress, let's back up that precious MySQL database to S3 as well. Go create another IAM user and give it AmazonS3FullAccess. I created a separate group for S3 access so I could reuse it later if needed. Make sure you run s3cmd --configure before creating the bash script and running it; s3cmd is going to use this new IAM user's keys. In that bash script you'll need to specify S3_BUNDLE, which is a bucket name. I created a separate bucket for this sole purpose so that my MySQL backups stay separate from the sub domain bucket set up earlier. Make your bash script executable and run it. If all goes well, you should see a gzipped dump file in your bucket. Set up a cron task to automate your nightly or weekly backups, and you're off to the races.
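In case it helps, here's a rough sketch of what that bash script looks like; the bucket, database, and credential names are placeholders, not my real ones:

#!/bin/bash
# nightly MySQL dump shipped to S3 via s3cmd
TIMESTAMP=$(date +%F)
S3_BUNDLE=your-backup-bucket-name        # the bucket created just for backups
BACKUP_FILE="/tmp/mysql-backup-$TIMESTAMP.sql.gz"

# dump and compress the WordPress database
mysqldump -u backup_user -p'your_password' wordpress_db | gzip > "$BACKUP_FILE"

# push the dump to S3, then clean up the local copy
s3cmd put "$BACKUP_FILE" "s3://$S3_BUNDLE/"
rm "$BACKUP_FILE"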


Whew! That was a lot. Here’s what we accomplished:

  1. We setup an Nginx server that hosts two domains (one top level, and one sub).
  2. We secured both domains with SSL certificates from Let’s Encrypt.
  3. We created some IAM users to upload certificates and MySQL backups.
  4. We created a static S3 bucket behind a CloudFront for content delivery.
  5. We secured our sub domain that points to our CloudFront CDN with an SSL certificate from Let’s Encrypt.
  6. We automated certificate renewal and MySQL backups with cron tasks (a sample crontab is sketched below).
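For completeness, here's roughly what those cron entries look like; the times, paths, and script name are placeholders. Note that the CloudFront certificate was issued with --manual, so it doesn't ride along with letsencrypt renew and still needs the manual dance (or some scripting) when it comes up for renewal.

# edit with: sudo crontab -e

# attempt renewal of the Nginx-hosted certs twice a week, reloading Nginx to pick up changes
30 2 * * 1,4 letsencrypt renew --quiet && systemctl reload nginx

# nightly MySQL dump shipped to S3 using the script from the backup section
15 1 * * * /home/youruser/scripts/mysql-backup-to-s3.sh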

This seems like a lot of work, but securing your server and sub domains is a good practice to get into. Let me know in the comments if there's anything I missed or need to correct.