Cody Casterline

Deno Embedder

2023-02-13 19:35:13 -0800

I've really been enjoying writing code in Deno. It does a great job of removing barriers to just writing some code. You can open up a text file, import some dependencies from URLs, and deno run it to get stuff done really quickly.

One nice thing is that Deno will walk all the transitive dependencies of any of your code, download them, and cache them. So even if your single file actually stands on the shoulders of giant( dependency tree)s, you still get to just treat it as a single script file that you want to run.

You can deno run foo.ts or deno install https://url.to/foo.ts and everything is pretty painless. My favorite is that you can even deno compile foo.ts to bundle up all of those transitive dependencies into a self-contained executable for folks who don't have/want Deno.

Well... almost.

This doesn't work if you're writing something that needs access to static data files, though. The problem is that Deno's cache resolution mechanism only works for code files (.ts, .js, .tsx, .jsx, and, more recently, .json). So if you want to include an index.html or style.css or image.jpg, you're stuck with either reading it from disk or fetching it from the network.

If you read from disk, deno run <remoteUrl> doesn't work, and if you fetch from the network, your application can't work in disconnected environments. (Not to mention the overhead of constantly re-fetching network resources every time your application needs them.)
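
For concreteness, here's roughly what those two workarounds look like (paths and URLs are just illustrative):

```ts
// Option 1: read from disk. This resolves relative to the module's own URL, so it
// only works when that URL is a local file: URL -- `deno run https://...` breaks.
const css = await Deno.readTextFile(new URL("./style.css", import.meta.url));

// Option 2: fetch from the network. Works for remotely-run scripts, but fails when
// offline and re-downloads the asset every time the application needs it.
const html = await (await fetch("https://example.com/index.html")).text();
```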

In FeoBlog, I've been using the rust-embed crate, which works well. I was a bit surprised that I didn't find anything that was quite as easy to use in Deno. So I wrote it myself!

Deno Embedder follows a pattern I first saw in Fresh: You run a development server that automatically (re)generates code for you during development. Once you're finished changing things, you commit both your changes AND the generated code, and deploy that.

In Fresh's case, the generated code is (I think?) just the fresh.gen.ts file which contains metadata about all of the web site's routes, and their corresponding .tsx files.

Deno Embedder instead creates a directory of .ts files containing base64-encoded, (possibly) compressed copies of files from some other source directory. These .ts files are perfectly cacheable by Deno, so they will automatically get picked up by deno run, deno install, deno compile, etc.
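
To illustrate the general idea, here's a sketch of my own (not the exact code Deno Embedder generates) of what such a module can look like:

```ts
// embedded/static/index_html.ts -- illustrative name and layout.

// The file's bytes, base64-encoded so they can live inside an ordinary .ts module
// that Deno will download and cache like any other dependency.
const data = "PGh0bWw+PGJvZHk+SGVsbG8hPC9ib2R5PjwvaHRtbD4K";

// Decode back to bytes at runtime. (The real generator may also compress the
// payload and decompress it here.)
export const bytes: Uint8Array = Uint8Array.from(atob(data), (c) => c.charCodeAt(0));
export const text: string = new TextDecoder().decode(bytes);
```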

I'm enjoying using it for another personal project I'm working on. I really like the model of creating a single executable that contains all of its dependencies, and this makes it a lot easier. Let me know if you end up using it!

My New ASUS Router Wants To Spy On Me

2023-01-26 19:15:12 -0800

After a recommendation from coworkers, and reading/watching some reviews online, I decided to get a new router. I purchased the "ASUS Rapture GT-AXE11000 WiFi6E" router in particular for its nice network analytics and QoS features.

On unpacking and setting up said router, I'm disappointed to find that the features I purchased the router for require that I give network analytics data over to a third party.

[Screenshot: setup popup describing data sharing with Trend Micro]

The last line of that popup says:

If you would like to disable sharing your information with Trend Micro through the above functions, please go to: Router web GUI > Advanced settings > Administration > Privacy

For a brief few seconds I was naive enough to think that the issue was just that this behavior was opt-out instead of opt-in. So I headed over to the Privacy settings to opt out.

[Screenshot: the Privacy settings page]

However, please note that such features/functions may not work if you stop sharing your information with Trend Micro.

"May not work" my ass! If you withdraw consent it just disables the features entirely, and then tells you:

[Screenshot: warning shown when withdrawing consent]

Please note that users are required to agree to share their information before using [the features that I bought this router for].

At least now (after a couple router restarts to apply settings) they're telling the truth. This is not an "option", it's a requirement.

If I go back to the "Statistic" or "Bandwidth Monitor" tabs, they're now disabled:

[Screenshot: the disabled Statistic tab]

[Screenshot: the disabled Bandwidth Monitor tab]

I'm considering returning this router for one that won't try to spy on me. There is NO reason for this kind of thing in my home router, a device which should be prioritizing my own security and privacy. And certainly not for features like QoS or bandwidth usage monitoring.

Does anyone have recommendations? I want something that:

  • Has good network analytics so that when network issues occur, I can determine if it's due to one of my devices, or my ISP.
  • Has good QoS (preferably one that can adjust to varying bandwidth availability throughout the day without me having to constantly toggle bandwidth caps).
  • Doesn't require consenting to third-party data collection.

2023-01-20 09:38:04 -0800

I do not trust myself to write software without some form of type checking. And I prefer more typing (ex: nullability, generics) when it is available.

This comes from a long history of realizing just how many errors end up coming down to type errors -- some piece of data was in a state I didn't account for or expect, because no type system was present to document and enforce its proper states.
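
A contrived TypeScript example (made-up types and function) of what I mean; with strict null checks, the compiler itself documents and enforces which states are possible:

```ts
// `email` is optional, so "user has no email" is a state the type system now
// documents -- and refuses to let me forget about.
type User = { name: string; email?: string };

function emailDomain(user: User): string {
  // Without this check, `user.email.split(...)` fails to compile under
  // strictNullChecks: the "state I didn't account for" is caught before runtime.
  if (user.email === undefined) {
    return "(no email on file)";
  }
  return user.email.split("@")[1];
}
```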

Relatedly, I trust other programmers less when they say they do not need these tools to write good code. My immediate impression when someone says this is that they have more ego than self-awareness. In my opinion, it's obvious which of those makes for a better coworker, teammate, or co-contributor.

2023-01-13 18:31:38 -0800

Fixing your code before the weekend is like cleaning your house before you go on vacation. So much nicer to come back to. 😊

2023-01-04 10:44:57 -0800

Me: I dislike that the usual software engineer career path is to move into management. I just want to write cooode!

Also me: (leading standup today, being taskmaster, making sure we capture details into tickets, unblock people, shuffle priorities from Product Mgmt, volunteering to help other devs w/ something they're stuck on) I am actually quite good at this.

😑

YAGNI

2022-12-21 15:55:34 -0800

YAGNI. AIYAGNI,YWKWYNUYNI.

Not (Yet) Banned: FeoBlog

2022-12-18 14:56:16 -0800

So Twitter came out with a great new feature today: You're not allowed to link to other social media web sites.

What is a violation of this policy?

At both the Tweet level and the account level, we will remove any free promotion of prohibited 3rd-party social media platforms, such as linking out (i.e. using URLs) to any of the below platforms on Twitter, or providing your handle without a URL:

  • Prohibited platforms:
    • Facebook, Instagram, Mastodon, Truth Social, Tribel, Post and Nostr
    • 3rd-party social media link aggregators such as linktr.ee, lnk.bio

It's a laughable attempt to stop the bleeding of people fleeing to other social networks, and it's going to Streisand Effect itself into the (figurative) Internet Hall of Fame. Most of the point of Twitter for many is finding and posting links to interesting stuff online.

What's next, a ban on "free promotion of prohibited 3rd-party news sources" that point out what a ridiculous policy this is? (Though, I suppose that's not far from what they're already doing -- banning reporters who unfavorably cover Musk.)

FeoBlog is not yet banned, of course, because it's not on anyone's radar. What can I do to get some more users and get it noticed?

If you want to give it a try, it's open source software, so you can download it and run your own server. Or, if you don't want to bother with all that, ping me and I'll get you set up with a free "account" on my server. :)

AWS Lambdas: WTF

2022-12-15 18:14:27 -0800

I've used AWS's SQS at several companies now. In general, it's a pretty reliable and performant message queue.

Previously, I'd used SQS queues in application code. A typical application asks for 1-10 messages from the SQS API, receives the messages, processes them, and marks them as completed, which removes them from the queue. If the application fails to do so within some timeout, it's assumed that the application has crashed/rebooted/etc, and the messages go back onto the queue, to be later fetched by some other instance of the application.
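
Sketched with the AWS SDK for JavaScript v3 (the queue URL and handler are placeholders), that loop looks something like this:

```ts
import {
  DeleteMessageCommand,
  ReceiveMessageCommand,
  SQSClient,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});
const QueueUrl = "https://sqs.us-west-2.amazonaws.com/123456789012/example-queue"; // placeholder

async function handle(body: string | undefined): Promise<void> {
  // Application-specific processing goes here.
}

async function pollOnce(): Promise<void> {
  const { Messages } = await sqs.send(
    new ReceiveMessageCommand({ QueueUrl, MaxNumberOfMessages: 10, WaitTimeSeconds: 20 }),
  );
  for (const msg of Messages ?? []) {
    await handle(msg.Body);
    // Deleting the message is what "marks it as completed". If this never happens
    // before the visibility timeout expires, the message goes back onto the queue.
    await sqs.send(new DeleteMessageCommand({ QueueUrl, ReceiptHandle: msg.ReceiptHandle! }));
  }
}

await pollOnce();
```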

To avoid infinite loops (say, if you've got a message that is actually causing your app to crash, or otherwise can't be properly processed), each message has a "receive count" property associated with it. Each time the message is fetched from the queue, its receive count is incremented. If a message is not processed by the time the "maximum receive count" is reached, instead of going back onto the queue, it gets moved into a separate "dead-letter queue" (DLQ) which holds all such messages so they can be inspected and resolved (usually manually, by a human who got alerted about the problem).
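
That dead-lettering behavior is configured on the source queue's redrive policy; a rough sketch (queue URL and ARN are placeholders):

```ts
import { SetQueueAttributesCommand, SQSClient } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

// Point the main queue at its DLQ and set the maximum receive count.
await sqs.send(new SetQueueAttributesCommand({
  QueueUrl: "https://sqs.us-west-2.amazonaws.com/123456789012/example-queue", // placeholder
  Attributes: {
    RedrivePolicy: JSON.stringify({
      deadLetterTargetArn: "arn:aws:sqs:us-west-2:123456789012:example-queue-dlq", // placeholder
      maxReceiveCount: "5", // after 5 receives without a delete, the message moves to the DLQ
    }),
  },
}));
```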

That generally works so well that today we were quite surprised to find some messages ending up in our DLQs, despite the fact that the code we had written to handle those messages showed no errors or log lines about them. After pulling in multiple other developers to investigate, one of them finally gave us the answer, and it came down to the fact that we're using Lambdas as our message processor.

So here's the issue, which you'll run into if:

  • you use a lambda function to process SQS messages
  • you set a reserved concurrency to limit that lambda's concurrency (see the sketch after this list)
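
That combination, roughly (function name, ARN, and limits are placeholders):

```ts
import {
  CreateEventSourceMappingCommand,
  LambdaClient,
  PutFunctionConcurrencyCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

// Cap the lambda's concurrency...
await lambda.send(new PutFunctionConcurrencyCommand({
  FunctionName: "process-example-queue", // placeholder
  ReservedConcurrentExecutions: 2,
}));

// ...while letting AWS's poller feed it messages from the queue.
await lambda.send(new CreateEventSourceMappingCommand({
  FunctionName: "process-example-queue", // placeholder
  EventSourceArn: "arn:aws:sqs:us-west-2:123456789012:example-queue", // placeholder
  BatchSize: 1, // even this doesn't avoid the problem described below
}));
```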

Whatever Amazon process feeds SQS messages into that lambda will fetch too many messages. (I'm not sure whether it fetches them in one large batch or in lots of individual fetches in parallel, but either way the result is the same.)

Every time it does this, it increments the messages' receive counts. And of course when they reach their max receive count, they go to the DLQ, without your code ever having seen them.

This happens outside of your control and unbeknownst to you. So when you get around to investigating your DLQ you'll be scratching your head trying to figure out why messages are in there. And there's no configuration you can change that fixes it. Even if you set the SQS batch size for the lambda to 1.

If you think you might be running into this problem, check two key stats in the AWS console: the "throttle" for the lambda, and the DLQ queue size. If you see a lambda that suddenly gets very throttled which correlates with lots of messages ending up in your DLQ, but see no errors in your logs, this is likely your culprit.
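
If you'd rather pull those numbers programmatically, something like this works (function and queue names are placeholders; "Throttles" and "ApproximateNumberOfMessagesVisible" are the standard CloudWatch metric names):

```ts
import { CloudWatchClient, GetMetricStatisticsCommand } from "@aws-sdk/client-cloudwatch";

const cw = new CloudWatchClient({});
const end = new Date();
const start = new Date(end.getTime() - 24 * 60 * 60 * 1000); // last 24 hours

// How often the lambda was throttled...
const throttles = await cw.send(new GetMetricStatisticsCommand({
  Namespace: "AWS/Lambda",
  MetricName: "Throttles",
  Dimensions: [{ Name: "FunctionName", Value: "process-example-queue" }], // placeholder
  StartTime: start,
  EndTime: end,
  Period: 3600,
  Statistics: ["Sum"],
}));

// ...versus how many messages are sitting in the DLQ.
const dlqDepth = await cw.send(new GetMetricStatisticsCommand({
  Namespace: "AWS/SQS",
  MetricName: "ApproximateNumberOfMessagesVisible",
  Dimensions: [{ Name: "QueueName", Value: "example-queue-dlq" }], // placeholder
  StartTime: start,
  EndTime: end,
  Period: 3600,
  Statistics: ["Maximum"],
}));

console.log(throttles.Datapoints, dlqDepth.Datapoints);
```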

It seems crazy that it works this way, and seemingly has for years. AWS's internal code is doing the wrong thing, and wasting developer hours across the globe. Ethically, there's also the question of whether you're getting billed for all of those erroneous message receives. But I'm mostly worried about having a bad system that is a pain in the ass to detect and work around.

Time Travel

2022-12-15 14:59:57 -0800

Me, minutes before a meeting: Just one more line. One more line of code.

(15 minutes later, seeing a clock): Dangit, I'm late for my meeting.

Habits

2022-12-14 13:05:27 -0800

Me: "Why do I put the cap back on my water bottle after every sip? This is annoying even to myself."

Also me: Knocks over the full bottle I just minutes before had placed between me and my keyboard and yet had somehow forgotten existed.

(Thankfully, the cap was on! 😆)