
Docusaurus Site Upgrade

· One min read

Yep, I've upgraded the site again.

Sadly, my love affair with Haskell has ended. I really enjoyed learning the language, and it definitely taught me a ton about how to write better code, but ultimately (IMO) it's just too hard to get work done efficiently with it. (Perhaps it's because I'm not smart enough.) I just found the lack of libraries for interfacing with modern systems like the Google APIs, combined with clunking around the language's warts, to be too much of a showstopper.

At the same time, I also discovered that Rust actually has most of what I liked about Haskell without all the pain that comes along with it, so I'm fully on the Rust train now!

But anyways! This wasn't a post about my break-up with Haskell, it was a post about upgrading my site! Emanote was cool, but it required using Nix, which I've also found to be quite aggravating, so I've moved on to Docusaurus!

So far, it's quite enjoyable, and transitioning over was pretty simple. So yea, there ya go!

Docker is using up all of my storage!

· 2 min read

So the other day, I was installing some new VSTs on my computer when I noticed in "This PC" that my main system drive only had 32 GB of storage space remaining. My system drive isn't huge, but I have around 500 GB of space and I was pretty sure I wasn't using all of it. Upon digging around, I was eventually led to Docker's ext4.vhdx using up a whopping 160 GB! How did this happen?

Note: This is on Windows.

Docker doesn't give you your space back

This solution comes from: https://stackoverflow.com/questions/64068185/docker-image-taking-up-space-after-deletion

Docker uses WSL and virtual hard disks to store its images and containers (or whatever). Apparently, even when you prune the system, the VHD doesn't shrink back down; it just keeps hogging all the space. So in order to fix this, you need to manually compact the VHD yourself.
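Before pruning, it helps to get a baseline. As a quick sanity check (this assumes PowerShell and the default vhdx location used in Step 3 below), docker system df shows what Docker thinks it's using, and Get-Item shows the actual size of the vhdx file on disk:

# What Docker reports as its own usage
docker system df
# Actual size of the vhdx on disk, in GB
(Get-Item "C:\Users\{YourUser}\AppData\Local\Docker\wsl\data\ext4.vhdx").Length / 1GB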

Step 1. Prune your system to clear out unused stuff

docker system prune -a
docker volume rm $(docker volume ls -q -f dangling=true)

Step 2. Shut down the Docker server (if you're using Docker Desktop, just quit out of it).
Also shut down WSL:

wsl --shutdown

Step 3. Compact the vhd (for me, located in C:\Users\{YourUser}\AppData\Local\Docker\wsl\data)

diskpart
select vdisk file="C:\Users\{YourUser}\AppData\Local\Docker\wsl\data\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk
exit

And that's it! My Docker vhd file went from taking up 160 GB of space to only 11 GB! Hope this helps!

My setup for hacking on GHC with VS Code

· 4 min read

I haven't posted on here in a good while, but since this blog also serves as a sort of "backup" for my brain, I wanted to take a second to jot down my GHC environment setup.

I've long wanted to get into compiler development, and I find that I've always done better by diving in rather than poking around at the edges; I think I'm good enough at Haskell to take a crack at GHC, so here I am!

After reading through the machine prep section of the GHC wiki (which is vast but also immensely helpful), along with the aid of a blog post from terrorjack, I decided I would alter the setup slightly to hopefully achieve something that's a little more ergonomic. For me, the path of least resistance is to use a Docker container.

Let's see the config

I created the following Dockerfile (in the project root), which seems to work great; IDE support works out of the box.

FROM registry.gitlab.haskell.org/ghc/ci-images/x86_64-linux-deb10:9e4c540d9e4972a36291dfdf81f079f37d748890

# Install ghcup
RUN curl --proto '=https' --tlsv1.2 -sSf https://get-ghcup.haskell.org | sh

# Add the ghcup bin directory to PATH so the Haskell IDE extension can find
# all of the GHC binaries
ENV PATH=${PATH}:/home/ghc/.ghcup/bin

# Override these variables so that `hadrian` will use
# the GHC binaries provided by ghcup instead of those in `/opt`
ENV CABAL=/home/ghc/.ghcup/bin/cabal
ENV GHC=/home/ghc/.ghcup/bin/ghc

# Finally, explicitly install HLS so it's ready for us right away!
RUN ghcup install hls

NOTE: The hash we use for the Docker image changes from time to time. Refer to the GHC CI config for the latest hash.

In addition to the Dockerfile, I also use the following devcontainer.json file (located at <ghc-root>/.devcontainer/devcontainer.json):

{
  "name": "Existing Dockerfile",

  // Sets the run context to one level up instead of the .devcontainer folder.
  "context": "..",

  // Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
  "dockerFile": "../Dockerfile",

  "customizations": {
    "vscode": {
      "extensions": [
        "haskell.haskell"
      ]
    }
  }
}

Another NOTE: This file can be generated. Clone the GHC repo, open the folder in VS Code, and then use the Remote Containers extension to "Open Folder in Container". It will prompt you for how to open it, and you pick the Dockerfile created up above. You can install the Haskell extension, choose "add to devcontainer.json", and it'll be there. But I just put this here in case you're feeling lazy.
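For reference, the clone-and-open part looks something like this (a rough sketch; GHC needs its submodules, and the code command assumes VS Code's CLI is on your PATH):

# Clone GHC along with its submodules
git clone --recurse-submodules https://gitlab.haskell.org/ghc/ghc.git
# Open the checkout in VS Code, then run "Open Folder in Container" from the Remote Containers extension
code ghc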

Why?

I decided to try and alter the prescribed setup mainly because HLS (the IDE backend) really likes ghcup to run the show. ghcup does a good job of coordinating your environment to ensure that HLS and the compiler play nice together. Also, with VS Code dev containers, some of the hassle of fixing user IDs/group IDs is taken care of for you automatically.
Initially I tried ghc.nix, but I ran into some errors with the HLS setup. Although I love the concept of Nix, I find that debugging Nix configuration is extremely difficult (at least for me it is).

Issues I ran into

Bad .hie-bios directory:

Because I first attempted to set up my GHC environment with Nix, I ended up with a bad .hie-bios directory which contained stale build artifacts. This manifested as a bunch of errors to the effect of "Package X is broken due to missing package 'base-4.16.1.0'".

In this case, the remedy was simply to delete the .hie-bios directory, then restart HLS and let it generate a fresh build.
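Concretely, from the root of the GHC checkout that's just the following (then restart HLS; reloading the VS Code window does the trick):

# Blow away the stale hie-bios artifacts so HLS regenerates them
rm -rf .hie-bios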

Docker on Windows doesn't work

Before trying ghc.nix, I actually tried to set up GHC on Windows with Docker. Unfortunately, it seems that Docker is in a somewhat malformed state on Windows, and so I ran into many compilation errors when running hadrian/build -j. Files kept failing to generate for some reason.

I may try again to get it working on Windows one day, but for now, Linux is the way. I run Linux in a VM, but WSL (which is essentially a VM) would probably work as well.

The learning ladder

· 5 min read

This post is part 1 of a two-part series on my thoughts about learning as it applies to those of us in the position of teaching.

There is a concept that I have named "the learning ladder", which is based on my own experiences both in the public school system and later as I ventured to understand functional programming and category theory. It goes something like this:

Imagine that learning is akin to climbing up a ladder. Each rung on the ladder represents some concept; let's keep it simple and say each rung is a math concept. At the very bottom of the ladder, we have really basic concepts like the counting numbers 1-10. At the top of the ladder (I'm guessing here), we have stuff like topology, group theory, category theory, etc.

In the beginning, we all start off at the bottom; it's pretty obvious that we can't learn how to add or subtract if we don't yet know how to even count. This aspect of teaching is quite well understood by educators and learners for the most part. It's only natural that certain levels of instruction assume prerequisite understandings have been met. We wouldn't try to teach calculus to a student who is still learning how to multiply. That would be like setting up a ladder for someone to climb, but with the first 30 rungs missing; there's no way they're gonna jump high enough to reach that first rung so far above them.

So this is fine, we all understand this well. But the place where we get into trouble has to do with a different type of issue known as "The curse of knowledge". Simply put, the curse of knowledge is the inability to explain a concept to a learner in a way that is easy for them to understand; this is because the teacher understands the concept so well that they can no longer understand what the learner struggles with or why. But I believe the curse of knowledge really stems from one simple problem: putting the rungs on the ladder too far apart.

There's a funny video I saw a while back (which I would link here but I can't find it) with a title to the effect of "How to draw a horse in 3 easy steps". In the video, the "teacher" walks you through the process: "Step 1: Draw a circle. Step 2: Draw two more smaller circles. Step 3: Draw a horse." This video, though a comically extreme example, gets at the very heart of the curse of knowledge. Too many steps were skipped between 2 and 3; step 3 really should've been more like step 10.

The place I've experienced this the most (and most recently) has been amongst the functional programming communities, although I believe it's common within the K-16 education system as well. It's of course not as extreme as the above example, but many articles written with the intent to teach tend to gloss over steps. They lead the reader up the ladder:

  • Step 1
  • Step 2
  • Step 4
  • Step 5

This is equivalent to having a ladder with missing steps; these missing steps can create difficulty. If the learner is mature enough to persevere, they may be able to make the leap upward despite the deficit, but even the most mature learners may be defeated if the jump is too great.

So with all of that said, the solution to the curse seems simple: we need to be more elaborate in our explanations, ensuring that we have lined our ladder with enough rungs that learners do not have to struggle to make their ascent.

There was an article I read one time that did this very well. At the beginning, it started with some initial premise; as I read it, I thought to myself, "Well, that's obvious." As I continued to follow through each step of the way, everything continued to be obvious. Before I knew it, I had reached the end of the article. The whole thing, absorbed piece by piece, seemed absurdly simple, but suddenly I had a firm understanding of a concept which had previously been very difficult. This is the type of teaching we should strive for! Each step on the ladder is so close to the previous one that we don't even realize how far we've come until we look back.

It can be really easy to skip steps that seem self-explanatory, but if we take the time to really hold the learner's hand and guide them through the entire process, we will find more success in our teaching. This is the reason I started writing my book. I want to create the perfect learning ladder; a journey that's so easy to follow from start to finish that anyone could learn to program and think to themselves at the end, "Oh, so that's all this was the whole time, huh?"

So that's it for this one! I hope my writing has been enlightening for someone. In part 2 of this series, I'll talk about "The M pyramid of learning".

Live code blocks in the Programming Book

· 2 min read

It's been a while since I dedicated a good block of time to working on my book. A big part of that is trying to really hone in on how to teach functions. (A thought I have about this is to perhaps read through SICP to see how they teach them, although I have my own notions anyways.)

But over the weekend, I managed to hack my way through the internals of mdbook and drafted a pull request adding support for PureScript in the code editor. What does that mean? It means the code blocks are live! You can write PureScript code right in the book and press the play button to run the code and see what it does.

It's not exactly perfect, but it's a drastic improvement over the old copy-paste style we had going before. A big thanks to all the brilliant engineers in the PureScript community for creating the tooling necessary to even make this possible!

This has me really excited and ready to clean out all the now-obsolete directions from the book, tackle this functions chapter, and move forward!

So yea! Just wanted to write that update. This programming book is near and dear to my heart, and I hope to bring a lot of new would-be software engineers into the fold through this book.

Cheers!

Emanote Site Upgrade

· One min read

Yep, I've upgraded the site again. Lately, I've been thinking about writing some blog posts which are less personal and more academic in nature; I didn't want to get these different types of posts mixed up, though, so I was holding out until I had some time to figure out how I wanted to separate them.

But then I remembered... a while back on the Zulip Haskell chat, there was a guy named srid who had been working on some interesting note-taking methods and building tools to facilitate those endeavors. I decided to look up his GitHub and found that he'd built this new static site generator called Emanote - it's very cool! In my experience so far, it's worked with very little issue, and getting up and running was fast.

So here we are. The site's been upgraded. Looking forward to migrating my guides here!

Cheers!

Convenience is King and Polish is Quality

· 6 min read

I've noticed this strange phenomenon recently... There's a phrase that goes "Use the best tool for the job", but what does that really mean? Initially, I would have thought it meant the tool with the greatest capacity to accomplish the target task, but it seems I thought wrong. When it comes to tool selection, I've seen time and time again that there appear to be two attributes which exert greater influence than capacity: convenience and polish.

Before we continue, let me define what I believe these terms mean:

  • I'd define convenience as how quick and easy it is to set up and start using a tool, and how much cognitive lift it takes to utilize that tool on a daily basis.
  • Polish is how nice the tool's UI is, and how well it accomplishes its intended tasks without encountering errors/bugs/malfunctions.

Convenience and polish seem to be such weighty factors in our daily lives that we're willing to give up just about anything for them; things like financial well-being, privacy, and even our health! Companies like Google, Apple, and Amazon (either consciously or instinctively) know this and use it to control our choices every day.

Anyways, I digress! I've seen it a few times that convenience and polish have won out against (what I consider to be) the better tool. Here are a few examples.

Haskell vs Blub

I discovered the joy of functional programming, and Haskell in particular, a few years back. The language is very fun to work in and has probably one of the greatest capacities for writing software that I've ever seen; so why has it not caught on? Up until recently, the development experience has been fraught with difficulty. Installing was hard, specifying dependencies was hard, compiling can be quite slow, and finding libraries is pretty confusing. But that's not to say that any of it was actually a major blocker; rather, it's just very inconvenient. Documentation is not convenient to find, multiple installers with different issues were a hassle, and probably the biggest gotcha was the IDE experience. For the longest time, developers essentially ran an auto-compile loop in a terminal as a bit of an IDE hack - definitely not convenient and about as unpolished as you can get. The Haskell community considered these to be minor issues (and maybe they are), but to the average guy, that inconvenience just makes the whole thing not worth it. Why deal with that when you could just spin up C#/JavaScript/Python/Go and be off to the races?

In defense of the Haskell community, there have been a lot of initiatives to improve these issues lately, and the Haskell IDE experience has improved by leaps and bounds. The Haskell Language Server installs automatically and the polish is great. With those improvements alone, I consider Haskell's dev story to be better than PureScript's now. Anyways, moving on!

VS Code vs Emacs

Back when I was working at CitizenNet, there was one particular day where I was pair programming with a colleague. At the time, the de facto code editor of choice on our team was Emacs (because it has a decent IDE experience for PureScript). However, I had seen how nicely LSP worked in VS Code and decided to try to make the switch. PureScript's LSP implementation was in good enough shape that it worked without too much hassle. As my colleague and I worked through a ticket, the compiler spat an error at us as it usually does, showing which file the error occurred in along with the line number. Normally in Emacs, we would use the file open command, locate the file, and jump to the error. But in VS Code, I could Ctrl+click the error location in the terminal and it took me straight there! Is that a trivial feature? Definitely! But my colleague was thoroughly impressed by it. I don't know if this was the tipping point for him or not, but he soon made the switch himself, with the rest of the team following not long after.

Zulip vs Google Chat

Zulip is an amazing chat tool with an interesting model for organizing conversations. Compared to Google Chat, I'd say Zulip definitely has a much greater capacity as a team communication app. However, the convenience and polish of Google Chat are just a little better. Convenience-wise, Google Chat wins because there's no software installation required, nor do you need to create an account for it. (This is sort of unfair: you do need a Gmail account, but Google has already secured itself as the most popular email provider, so most people have an account by default.) Polish-wise, Zulip has had intermittent issues where notifications wouldn't come through. These issues were enough to convince my team to jump ship from Zulip to Google Chat. I'm slightly pained to see it, as I'm a big fan of Zulip, but I don't blame them at all!

Erlang vs Elixir

I've been taking some time as of late to learn the BEAM languages, since I hear they're pretty impressive for writing server applications. I was unsure whether I should go with Elixir or Erlang, but a quick trial of each promptly led me to Elixir. What were the deciding factors? Erlang's VS Code plugins just didn't seem to work very well. ErlangLS couldn't infer type specs (it kept saying to check the logs to find out why). Also, rebar was OK, but seemed a little confusing to work with. Elixir wasn't perfect in comparison, but the ElixirLS plugin worked right out of the box, and mix was a good amount easier to work with too.

Conclusion

So why did I write all of this? I've long seen discussions in the Haskell community about what can be done to foster greater adoption. At the same time, I've also dealt first-hand with trying to get my own team at work to adopt technologies that promised overall improvement of our processes. I (of course) would always recommend the tools I considered to be the best, but even though these tools were very good, getting the team to use them has not often been easy. Eventually, we landed on using Google Chat as our internal communication medium and Zoho Bigin as our project management tool (despite Bigin not actually being a project management tool). I spent a good amount of time trying to get the team to adopt certain technologies, but it only took about a day to adopt Bigin and Google Chat. The difference was the convenience and polish. These tools may not be the most powerful ones to aid us in our work, but they were the most convenient, and they worked without issue.

If we want to foster greater adoption of any type of technology, these two attributes should not be overlooked. Indeed, I believe they may actually be the most important ones.

Starting a blog

· 3 min read

So, I finally took some time to upgrade my site from plain HTML/CSS to a statically generated one with some light blogging capabilities.

Why? I've noticed that whenever I stumble upon a cool software project, it's both neat and helpful when the project devs have a blog. It provides a sort of newsfeed that is useful for gauging where their current focus is, and sometimes they have useful educational content too. I've also been wanting to play with static site generators for a while, so this provided a nice opportunity to try one out.

I don't intend to go too blog crazy, but I thought it'd be nice to follow suit and keep some updates on my current endeavors.

What's new

Currently, I have a couple of different projects I'm splitting my energy between. In general, I've got the goal in mind to at least gain some beginner knowledge in compiler development. I find this area of computer science fascinating, and I feel like it could help me contribute to the Haskell programming language.

I've also recently joined forces with my long-time friend FlipCoder to help work on a cool project called TextBeat - a text-based music sequencer.

Outside of programming, I've also been writing music for a D&D podcast my brother-in-law has been working on called "The Dungeon Dweebs". I've probably spent way too much money on VSL plugins, but it was 100% worth it. Writing music has long been a passion of mine, and I can't wait to put all this music up on display for people to hear! I don't have much online currently, but I have a music profile on SoundCloud here.

And in my very small amount of spare time, I've really been taking an interest in StarCraft again. I've been watching tournaments every now and then (like this one), and it's just crazy how good these guys are. I never thought I'd be one of those people who's into e-sports, but I definitely am! I think it'd be fun to try to get good at this game, time permitting. For now, I'm happy to just finish the "Legacy of the Void" campaigns.

What about your programming book?

I haven't forgotten about this guy. I'm just a bit stuck on the functions chapter right now. I've re-written it a few times, but I'm just not satisfied with how it's coming out. However, I have an idea that's been brewing in my head for a few weeks now; hopefully it'll come to fruition soon, write itself out for me, and then we'll be back on track!

Current state of life?

Everything is going OK here. Jen is nearing the end of her grad school journey; she basically has one major semester left, and the semester after that will consist of only a single class - pretty exciting! I keep joking with her that when she finishes, I can quit my day job and pursue a career as a film composer and she'll support us lol. Grandma is doing well; she's just about halfway through the age of 98 and is still running strong!

Anyways, that's it!