The joy of programming

Compiling Emacs

Let’s go bleeding edge!

We are going to compile the latest version of Emacs.
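The gist of the build, as a minimal sketch in Python (the repository URL is the official GNU Savannah git mirror; git, autoconf, make, and a C toolchain are assumed to be installed):

```python
# A minimal sketch of building bleeding-edge Emacs from git.
import subprocess

def run(cmd, cwd=None):
    print("$", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

run(["git", "clone", "--depth", "1", "https://git.savannah.gnu.org/git/emacs.git"])
run(["./autogen.sh"], cwd="emacs")  # generates the configure script
run(["./configure"], cwd="emacs")   # pass your own flags here if you need them
run(["make", "-j4"], cwd="emacs")   # the binary ends up in emacs/src/emacs
```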

Read More...

The black art of commit messages

Good commit messages are the unicorn that anyone maintaining legacy code hopes to find. I will review some commit messages I’ve found over the years, along with their pros and cons.

I’ll break this into two categories: solving bugs and adding features.

Read More...

Ambition

Now that we are hiring at Platform161, I have spent some time thinking about values and culture in the workplace. There is one term that, in my opinion, is completely misunderstood: ambition.

Warning: everything I am about to write is my opinion and, of course, highly opinionated.

Read More...

A warm welcome to MS Sculpt keyboard

I just moved to a new Microsoft Sculpt keyboard and I am more than happy with it. I had wanted to move to a split keyboard for a long time but never had the guts to try. My real goal is a Model 01, but this makes a good intermediate step.

Read More...

Pairing over Tmux

Tmux is an awesome tool. One of its greatest uses is that it allows us to pair by sharing a session. Still, the setup is always a little complicated if you want to do it really right. Let’s see what we can do.
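As a taste of the shared-socket approach the post covers, here is a minimal sketch in Python (the socket path and the group name "pair" are made up for the example; both users must be able to reach the socket file):

```python
# A minimal sketch of tmux pairing over a shared socket. Assumes tmux is
# installed and both users belong to a common Unix group (here "pair").
import subprocess

SOCKET = "/tmp/pair.sock"  # hypothetical shared socket path

# Host: start a detached session on the shared socket.
subprocess.run(["tmux", "-S", SOCKET, "new-session", "-d", "-s", "pairing"], check=True)
# Open the socket to the group so the guest can attach.
subprocess.run(["chgrp", "pair", SOCKET], check=True)
subprocess.run(["chmod", "g+rw", SOCKET], check=True)

# The guest attaches from their own login:
print(f"guest$ tmux -S {SOCKET} attach -t pairing")
```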

Read More...

Book notes

These are the notes for the books I’m reading or have already read.

Building Ethereum DApps

First examples

It is funny because I am using this chapter to learn that Ethereum now has checksummed addresses, which mix uppercase and lowercase letters. For example, 0xca35b7d915458ef540ade6068dfe2f44e8fa733c is not a checksummed address, so I cannot hardcode it. In the book the addresses are all without checksum yet, so we cannot use them. It is fun because I didn’t find any online checksummer. I am building my own ¯\_(ツ)_/¯
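For reference, this is roughly what such a checksummer looks like, following EIP-55: hash the lowercase hex address with Keccak-256 and uppercase every hex letter whose matching hash nibble is 8 or higher. A minimal sketch in Python, assuming pycryptodome for Keccak (Ethereum’s Keccak-256 is not the SHA3-256 in hashlib):

```python
# EIP-55 checksummer sketch. Requires pycryptodome: pip install pycryptodome
from Crypto.Hash import keccak

def to_checksum_address(address: str) -> str:
    addr = address.lower().replace("0x", "")
    # Hash the lowercase hex string itself (as ASCII), per EIP-55.
    digest = keccak.new(digest_bits=256, data=addr.encode("ascii")).hexdigest()
    # Uppercase each hex letter whose matching hash nibble is >= 8.
    return "0x" + "".join(
        c.upper() if int(digest[i], 16) >= 8 else c for i, c in enumerate(addr)
    )

print(to_checksum_address("0xca35b7d915458ef540ade6068dfe2f44e8fa733c"))
# -> 0xCA35b7d915458EF540aDe6068dFe2F44E8fa733c
```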

Deep learning with Python

Chapter 1: What is deep learning

Skimmed.

- Deep learning: multiple layers of learning.
- Shallow learning: only a few layers of learning.
- Each layer of the network has an increasing level of abstraction.
- The goal for us when building the neural network is finding the correct values for the weights of the neurons in the layers.
- We have a loss function in charge of measuring how far we are from the expected result, so the weights can be adjusted (sketched below).

Read More...
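To make the weight-adjustment idea concrete, here is a toy example of my own (not from the book): a single linear neuron fitted with a squared loss and plain gradient descent, no libraries needed.

```python
# Toy illustration of "the loss function adjusts the weights": fit w and b
# so that w*x + b matches targets generated from the made-up line y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b = 0.0, 0.0  # weights start out wrong on purpose
lr = 0.05        # learning rate: how big each adjustment step is

for _ in range(2000):
    # Gradient of the mean squared loss with respect to w and b.
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    # Nudge the weights in the direction that lowers the loss.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```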