It's the holidays, and you know what that means. … I've had enough time off from programming professionally that now I want to do some of it for fun!
I previously wrote a tool that syncs statuses from Mastodon to my FeoBlog instance. I was thinking about making some updates to that, and then I remembered that I'd read that Threads got around to testing their federation support.
Maybe I can update my tool to also sync posts from Threads, then? Let's test that out and see how their interoperability is coming along. (Spoiler: So far, not great.)
Nushell's built-in support for HTTP requests, JSON, and structured data makes it pretty nice for doing this kind of experimentation, so that's what I'm using here. Let's start by fetching a "status" with the Mastodon REST API:
def getStatus [id: int, --server = "mastodon.social"] {
http get $"https://($server)/api/v1/statuses/($id)"
}
let status = getStatus 111656384454261788
$status | select uri created_at in_reply_to_id
This works, and gives us back (among other data):
╭────────────────┬──────────────────────────────────────────────────────────────────────╮
│ uri │ https://mastodon.social/users/Iamgroot11/statuses/111656384454261788 │
│ created_at │ 2023-12-28T05:26:57.877Z │
│ in_reply_to_id │ 111656368951064667 │
╰────────────────┴──────────────────────────────────────────────────────────────────────╯
So does this, even though that status is coming from a different server. (Yay, federation!)
let status2 = getStatus $status.in_reply_to_id
$status2 | select uri created_at in_reply_to_id
╭────────────────┬───────────────────────────────────────────────────────────────╮
│ uri │ https://spacey.space/users/kmccoy/statuses/111656368910605303 │
│ created_at │ 2023-12-28T05:23:00.000Z │
│ in_reply_to_id │ 111656361314256557 │
╰────────────────┴───────────────────────────────────────────────────────────────╯
We can use the ActivityPub API for retrieving objects from remote servers to confirm that the version we got from our server matches the one published by this user:
def getActivityStream [uri] {(
http get $uri
--headers [
accept
'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'
]
| from json
)}
let remote_status2 = getActivityStream $status2.uri
$remote_status2 | select url published inReplyTo
╭───────────┬──────────────────────────────────────────────────────────────────╮
│ url │ https://spacey.space/@kmccoy/111656368910605303 │
│ published │ 2023-12-28T05:23:00Z │
│ inReplyTo │ https://www.threads.net/ap/users/mosseri/post/17928407810714224/ │
╰───────────┴──────────────────────────────────────────────────────────────────╯
But there's no such luck with Threads. We can fetch the status from mastodon.social:
let status3 = getStatus $status2.in_reply_to_id
$status3 | select uri created_at in_reply_to_id
╭────────────────┬──────────────────────────────────────────────────────────────────╮
│ uri │ https://www.threads.net/ap/users/mosseri/post/17928407810714224/ │
│ created_at │ 2023-12-28T05:15:53.000Z │
│ in_reply_to_id │ │
╰────────────────┴──────────────────────────────────────────────────────────────────╯
But threads.net seems to be misrepresenting that it has an ActivityPub (ActivityStream) object at that URL/URI:
getActivityStream $status3.uri
Error: nu::shell::network_failure
× Network failure
╭─[entry #182:1:1]
1 │ def getActivityStream [uri] {(
2 │ http get $uri
· ──┬─
· ╰── Requested file not found (404): "https://www.threads.net/ap/users/mosseri/post/17928407810714224/"
3 │ --headers [
╰────
I discovered that the URL (not URI) advertises that it is an "activity":
http get $status3.url | parse --regex '(<link .*?>)' | find -r activity | get capture0 | each { from xml }
╭──────┬──────────────────────────────────────────────────────────────┬────────────────╮
│ tag │ attributes │ content │
├──────┼──────────────────────────────────────────────────────────────┼────────────────┤
│ link │ ╭──────┬───────────────────────────────────────────────────╮ │ [list 0 items] │
│ │ │ href │ https://www.threads.net/@mosseri/post/C1YndCeuddr │ │ │
│ │ │ type │ application/activity+json │ │ │
│ │ ╰──────┴───────────────────────────────────────────────────╯ │ │
╰──────┴──────────────────────────────────────────────────────────────┴────────────────╯
… but that URL doesn't serve an Activity either. (It just ignores our Accept header and gives back a Content-Type: text/html; charset="utf-8".)
So what we seem to have here is Threads doing juuuust enough work to shove ActivityPub messages into Mastodon. But it's certainly not yet supporting enough of the ActivityStream/ActivityPub API to validate against spoofing attacks, as the W3C docs recommend:
Servers SHOULD validate the content they receive to avoid content spoofing attacks. (A server should do something at least as robust as checking that the object appears as received at its origin, but mechanisms such as checking signatures would be better if available). No particular mechanism for verification is authoritatively specified by this document, [...]
Is Mastodon just accepting those objects from a peered server without any sort of validation that they match what that peer serves for that activity? That would allow Threads to inject ads into (or otherwise modify) statuses that it pushes into Mastodon.
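To make the concern concrete, here's a sketch of the kind of origin check a receiving server could do. (The Activity shape and matchesOrigin are simplified stand-ins of my own invention, not Mastodon's actual code.)

```typescript
// Sketch: after a peer pushes an object to us, re-fetch that object
// from its origin server and compare the fields we care about.
interface Activity {
  id: string;
  content: string;
  attributedTo: string;
}

function matchesOrigin(pushed: Activity, origin: Activity): boolean {
  // If any field a peer could tamper with differs from the origin's
  // copy, reject the pushed version.
  return (
    pushed.id === origin.id &&
    pushed.content === origin.content &&
    pushed.attributedTo === origin.attributedTo
  );
}
```

A real implementation would fetch the origin copy with the ActivityStreams Accept header, which is exactly the request Threads is currently refusing to serve.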
Or maybe Threads is only responding to ActivityStream requests if they're coming from a peer server that has been explicitly granted access? That would let them "federate" with peers on their terms, while not letting us plebs peek into the walled garden of data.
I'll reservedly concede that this may just be the current unfinished state of Threads's support for ActivityPub/ActivityStreams. But let's wait and see how much they actually implement, and how interoperable it ends up being.
I'm starting to get a reputation as a bit of a Deno fanatic lately. But (if you haven't seen the title of this blog post) it might surprise you why I'm such a fan.
If you visit deno.com, the official documentation will tell you things like:
While all of those are great features, in my opinion the most underrated feature of Deno is that it's a great replacement for Bash (and Python/Ruby!) for most of your CLI scripting/automation needs. Here's why:
Bash is great for tossing a few CLI commands into a file and executing them, but the moment you reach for a variable or an if statement, you should probably switch to a more modern programming language.
Bash is old and has accumulated a lot of quirks that not all programmers will be familiar with. Instead of removing the quirks, or warning about them, they're kept to ensure backward compatibility. But that doesn't make for a great programming language.
For example, a developer might write code roughly like:
if [ $x == 42 ]; then
echo "do something"
else
echo "do something else"
fi
Can you spot the problems?
- If $x is undefined, the test expression will fail with an error.
- If $x is a string that includes spaces and/or a ] character, the test will likely return an error.
- Certain values of $x may silently return false positives for this match. (I leave crafting them as an exercise to the reader. Share your favorites!)

These gotchas are even more dangerous when you're writing a script to manage files. Several versions of rm now have built-in protections against accidentally running rm -rf / because it is such a common mistake to make in Bash and other shells when your variable expansion goes awry.
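For contrast, here's the same branch as a TypeScript sketch (check is a name I made up for illustration): an undefined or mistyped x is a compile error rather than a runtime surprise, and no amount of spaces or ] characters in the string can break the comparison.

```typescript
// The same check in a typed language: `x` must exist and must be a
// string, and the comparison can't be broken by its contents.
function check(x: string): string {
  if (x === "42") {
    return "do something";
  }
  return "do something else";
}

console.log(check("42")); // prints "do something"
```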
Do you need an array? As recently as a couple of years ago (and possibly even still?), the default version of Bash on MacOS was old enough not to support associative arrays. If you write a Bash script that uses them, you'll get different (wrong) behavior on MacOS.
Seriously, stop writing things in Bash!
My theory is that Bash scripts are the default because people want to just write a self-contained file to get a thing done quickly.
Previously, Python was what I would reach for once a task became unwieldy in Bash. But in Python you might need to include a requirements.txt to list any library dependencies you use. And if you depend on particular versions of libraries, you might need to set up a venv to make sure you don't conflict with the same libraries installed system-wide at different versions. Now your "single-file" script needs multiple files and install instructions.
But in Deno you can include versioned dependencies right in the file:
import { range } from "https://deno.land/x/better_iterators@v1.3.0/mod.ts"
for (const value of range({step: 3}).limit(10)) {
console.log(value)
}
There is no install step for executing this script. (Assuming your system already has Deno.) The first time you deno run example.ts, Deno will automatically download and cache the remote dependencies.
You can even add a "shebang" to make the script executable directly on Linux/MacOS:
#!/usr/bin/env -S deno run
import ...
While Windows doesn't support shebang script files, the deno install command works on Windows/Linux/MacOS to install a user-local script wrapper that works everywhere.
Not only that, you can deno install and deno run scripts from any URL!
deno run https://deno.land/x/cliffy@v0.25.7/examples/ansi/color_themes.ts
Deno makes TypeScript a first-class language instead of an add-on, as it is in Node.js, so the file you write is strongly typed right out of the box. This can help detect many sorts of errors that Bash, Python, Ruby, and other scripting languages would let through the cracks.
By default, Deno leaves type checking to your IDE. (I recommend the Deno plugin for VSCode.) The theory is that you've probably written your script in an IDE, so by the time you deno run it, it would be redundant to check it again. But if you or your teammates prefer to code in plain text editors, you can get type-checking there as well by updating your shebang:
#!/usr/bin/env -S deno run --check
Now, Deno will perform a type check on a script before executing it. If the check fails, the script is never executed. This is much safer than getting half-way through a Bash or Python script and failing or running into undefined behavior because you typo'd a variable name, or had a syntax error.
Don't worry, the results of type checks are cached by Deno, so you will only pay the cost when the file is first run or modified.
While I have not been a fan of JavaScript in the past, Deno modernizes JavaScript/TypeScript development so much that I find myself very productive in it. It's replaced Bash and Python as my go-to scripting language. If you or your team are writing Bash scripts, I'd strongly recommend trying Deno instead!
I try to be pragmatic when it comes to programming languages. I've enjoyed learning a lot of programming languages over the years, and they all have varying benefits and downsides. Like a lot of topics in Computer Science, choosing a language is all about tradeoffs.
I've seen too many instances of people blaming the programming language they're using for some problem when in fact it's just that they misunderstood the problem, or didn't have a good grasp of how the language worked. I don't want to be That Guy.
However, I still really haven't understood the hype behind Go. Despite using it a few times over the years, I do not find writing Go code pleasant. Never have I thought "Wow, this is so much nicer in Go than [other language]." If asked, I can't think of a task I'd recommend writing in Go vs. several other languages. (In fact, part of my reason for finally compiling my issues with Go into a blog post is so I'll have a convenient place to point folks in the future w/o having to rehash everything.)
Before I get into what I don't like about the language, I'll give credit where credit is due for several features that I do like in Go.
Go doesn't suffer from "Function Coloring". All Go code runs in lightweight "goroutines", which Go automatically suspends when they're waiting on I/O. For simple functions, you don't have to worry about marking them as async, or awaiting their results. You just write procedural code and get async performance "for free".
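For contrast, here's what function coloring looks like in TypeScript (a toy example): async-ness is part of a function's type, and it propagates to every caller up the stack.

```typescript
// `fetchValue` stands in for some I/O-bound work. Because it's async,
// every function that wants its result must also be async (or juggle
// Promises explicitly). The "color" spreads up the call stack.
async function fetchValue(): Promise<number> {
  return 42;
}

async function caller(): Promise<number> {
  // Forgetting `await` here would hand us a Promise, not a number.
  return (await fetchValue()) + 1;
}
```

In Go, both of these would just be ordinary functions, and the runtime would handle the suspension for you.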
Defer is great. I love patterns and tools that let me get rid of unnecessary indentation in my code. Instead of something like:
let resource = openResource()
try {
let resource2 = openResource2()
try {
// etc.
} finally {
resource2.close()
}
} finally {
resource.close()
}
You get something like:
resource := openResource()
defer resource.Close()
resource2 := openResource2()
defer resource2.Close()
// etc.
The common pattern in other languages is to provide control flow that desugars into try/finally/close, but even that simplification still results in unnecessary indentation:
try (var fr = new FileReader(path); var fr2 = new FileReader(path2)) {
// (indented) etc.
}
I prefer flatter code, and defer is great for that.
I've been hearing "[Prefer] composition over inheritance" since I was in university (many) years ago, but Go was the first language I learned that seemed to take it to heart. Go does not have classes, so there is no inheritance. But if you embed a struct into another struct, the Go compiler does all the work of composition for you. No need to write boilerplate delegation methods. Nice.
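For comparison, here's the hand-written delegation that composition usually costs you in languages without embedding (a TypeScript sketch with invented names):

```typescript
// Without language support for embedding, composing `Engine` into
// `Car` means writing a forwarding method for each capability.
class Engine {
  start(): string {
    return "vroom";
  }
}

class Car {
  private engine = new Engine();
  // Boilerplate delegation that Go's struct embedding generates for us.
  start(): string {
    return this.engine.start();
  }
}
```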
Now that we have the nice parts out of the way, I'll dig into the parts I have problems with. I'll start with a story about my experience with Go. Feel free to skip to the "TL;DR" section below for the summary.
Back in 2016, my team was maintaining a tool that needed to make thousands of HTTP(S) requests several times a day. The tool had been written in Python, and as the number of requests grew, it was taking longer and longer to run. A teammate decided to take a stab at rewriting it in Go to see if we could get a performance increase. His initial tests looked promising, but we quickly ran into our first issues.
The initial implementation just queried a list of all URLs we needed to fetch, then created a goroutine for each one. Each goroutine would fetch data from the URL, then send the results to a channel to be collected and analyzed downstream. (IIRC this is a pattern lifted directly from the Tour of Go docs. Goroutines are cheap! Just make everything a goroutine! Woo!) Unfortunately, creating an unbounded number of goroutines both consumed an unbounded amount of memory and an unbounded amount of network traffic. We ended up getting less reliable results in Go due to an increase in timeouts and crashes.
Given the chance to help out with a new programming language, I joined the effort and we ended up finding that we had two bottlenecks: First, our DNS server seemed to have some maximum number of simultaneous requests it would reliably support. But also (possibly relatedly), we seemed to be overwhelming our network bandwidth/stack/quota when querying ALL THE THINGS at the same time.
I suggested we put some bounds on the parallelism. If I were working in Java, I'd reach for something like an ExecutorService, which is a very nice API for sending tasks to a thread pool and receiving the results. We didn't find anything like that in Go. I guess the lack of generics meant that it wasn't easy for anyone to write a generic high-level library like that in Go. So instead, we wrote all the boilerplate channel management ourselves. Because we had two different worker pools to manage, we had to write it twice. And we had to use low-level synchronization tools like WaitGroups to manually manage resources.
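To illustrate the kind of generic helper we were wishing for, here's a bounded fan-out/fan-in sketch (written in TypeScript for brevity; mapWithConcurrency is an invented name, not a real library API):

```typescript
// Run `fn` over `items` with at most `limit` tasks in flight at once,
// preserving input order in the results. This is the boilerplate we
// ended up hand-writing twice in Go.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each worker repeatedly claims the next unprocessed index. Claiming
  // is safe without locks because JS is single-threaded between awaits.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  const pool = Array.from(
    { length: Math.max(1, Math.min(limit, items.length)) },
    () => worker(),
  );
  await Promise.all(pool);
  return results;
}
```

With something like this in a tested library, the tool itself would only have contained business logic: one bounded pool for DNS lookups, another for the HTTP fetches.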
Disillusioned by how gnarly a simple tool had turned out, I did some searching to find out if Go had plans to add generics. At the time, the answer was a vehement "No". Not only did the language implementors say it was unnecessary (despite having hard-coded generics equivalents for things like append(), make(), channels, etc.), but the community seemed downright hostile to people asking about it.
At that point I'd already played with Rust enough to have a fair idea that such a thing was possible. In a weekend or two, I wrote a Rust library called Pipeliner, which handles all of the boilerplate of parallelism for you. Behind the scenes, it works much like our Go code: it creates worker pools, passes data to them through channels, and collects all the results (fan-out/fan-in). Unlike the Go code, all that logic gets written and tested in a separate, generic library, leaving our tool to contain just our high-level business logic. Additionally, this was all implemented atop Rust's type-safe, null-safe, memory-safe IntoIterator trait. All of our application logic could be expressed more succinctly and safely, in roughly:
let results = load_urls()?
.with_threads(num_dns_threads, do_dns_lookup)
.with_threads(num_http_threads, do_http_queries);
for result in results {
// etc.
}
Recently, I interviewed with a company that wrote mostly/only Go. "No problem," I thought. "I'm pragmatic. Go can't be as bad as I remember. And it's got generics now!"
To brush up on my Go, and learn its generics, I decided to port Pipeliner "back" into Go. But I didn't get far into that task before I hit a roadblock: Go generics do not allow generic type parameters on methods. This means you can't write something like:
type Mapper[T any] interface {
    Map[Output any](mapFn func(T) Output) Mapper[Output]
}
Which means your library's users can't:
zs := xs.Map(to_y).Map(to_z)
This is due to a limitation in the way that interfaces are resolved in Go. It feels like a glaring hole in generics which other languages don't suffer from. "I'm pretty sure TypeScript has a better Generics implementation than this", I thought to myself. So I set off to write a TypeScript implementation to prove myself wrong. I failed.
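For reference, the equivalent interface type-checks just fine in TypeScript, since a method there can introduce its own type parameter. (ArrayMapper is a toy implementation I made up for illustration.)

```typescript
// TypeScript lets a method declare its own type parameter, so a
// chainable, type-safe Map is straightforward to express.
interface Mapper<T> {
  map<Output>(mapFn: (value: T) => Output): Mapper<Output>;
}

// A toy array-backed implementation of the interface above.
class ArrayMapper<T> implements Mapper<T> {
  constructor(private values: T[]) {}
  map<Output>(mapFn: (value: T) => Output): ArrayMapper<Output> {
    return new ArrayMapper(this.values.map(mapFn));
  }
  toArray(): T[] {
    return [...this.values];
  }
}
```

Chained calls like new ArrayMapper([1, 2]).map(to_y).map(to_z) infer the right types at every step.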
IMO, good languages give library authors tools to write nice abstractions so that other developers have easy, safe tools to use, instead of having to rewrite/reinvent boilerplate all the time.
OK, this post is already really long, so I'm just going to bullet-point some of my other complaints:
- Why do I have to write foo = append(foo, bar) instead of just foo.push(bar)?
- Does Foo implement Bar? Better check all its methods to see if they match Bar's. You can't just declare Foo implements Bar and have the compiler check it for you. (My favorite syntax for this is Rust's: impl Bar for Foo { … }, which explicitly groups only the methods required for implementing a particular interface, so it's clear what each method is for.)
- Relatedly, there's no way to mark an overriding method as an override so that the compiler can check it for you.

I've really been enjoying writing code in Deno. It does a great job of removing barriers to just writing some code. You can open up a text file, import some dependencies from URLs, and deno run it to get stuff done really quickly.
One nice thing is that Deno will walk all the transitive dependencies of any of your code, download them, and cache them. So even if your single file actually stands on the shoulders of giant( dependency tree)s, you still get to just treat it as a single script file that you want to run.
You can deno run foo.ts or deno install https://url.to/foo.ts and everything is pretty painless. My favorite is that you can even deno compile foo.ts to bundle up all of those transitive dependencies into a self-contained executable for folks who don't have/want Deno.
This doesn't work if you're writing something that needs access to static data files, though. The problem is that Deno's cache resolution mechanism only works for code files (.ts, .js, .tsx, .jsx, and more recently, .json). So if you want to include an index.html or style.css or image.jpg, you're stuck with either reading it from disk or fetching it from the network.
If you read from disk, deno run <remoteUrl> doesn't work; and if you fetch from the network, your application can't work in disconnected environments. (Not to mention the overhead of constantly re-fetching network resources every time your application needs them.)
In FeoBlog, I've been using the rust-embed crate, which works well. I was a bit surprised that I didn't find anything that was quite as easy to use in Deno. So I wrote it myself!
Deno Embedder follows a pattern I first saw in Fresh: You run a development server that automatically (re)generates code for you during development. Once you're finished changing things, you commit both your changes AND the generated code, and deploy that.
In Fresh's case, the generated code is (I think?) just the fresh.gen.ts file, which contains metadata about all of the web site's routes and their corresponding .tsx files.
Deno Embedder instead creates a directory of .ts files containing base64-encoded, (possibly) compressed copies of files from some other source directory. These .ts files are perfectly cacheable by Deno, so they automatically get picked up by deno run, deno install, deno compile, etc.
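Conceptually, one of those generated modules looks something like this. (A simplified sketch of the idea only; this is not Deno Embedder's actual output format.)

```typescript
// A file's bytes, stored base64-encoded in a plain .ts module that any
// Deno tooling can download and cache like normal source code.
const data = "SGVsbG8sIHdvcmxkIQ=="; // base64 of "Hello, world!"

export function bytes(): Uint8Array {
  // Decode the base64 payload back into raw bytes at runtime.
  return Uint8Array.from(atob(data), (c) => c.charCodeAt(0));
}

export function text(): string {
  return new TextDecoder().decode(bytes());
}
```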
I'm enjoying using it for another personal project I'm working on. I really like the model of creating a single executable that contains all of its dependencies, and this makes it a lot easier. Let me know if you end up using it!
After a recommendation from coworkers, and reading/watching some reviews online, I decided to get a new router. I purchased the "ASUS Rapture GT-AXE11000 WiFi6E" router in particular for its nice network analytics and QoS features.
On unpacking and setting up said router, I'm disappointed to find that the features I purchased the router for require that I give network analytics data over to a third-party.
The last line of that popup says:
If you would like to disable sharing your information with Trend Micro through the above functions, please go to: Router web GUI > Advanced settings > Administration > Privacy
For a brief few seconds I was naive enough to think that the issue was just that this behavior was opt-out instead of opt-in. So I headed over to the Privacy settings to opt out.
However, please note that such features/functions may not work if you stop sharing your information with Trend Micro.
"May not work" my ass! If you withdraw consent it just disables the features entirely, and then tells you:
Please note that users are required to agree to share their information before using [the features that I bought this router for].
At least now (after a couple router restarts to apply settings) they're telling the truth. This is not an "option", it's a requirement.
If I go back to the "Statistic" or "Bandwidth Monitor" tabs, they're now disabled:
I'm considering returning this router for one that won't try to spy on me. There is NO reason for this kind of thing in my home router, a device which should be prioritizing my own security and privacy. And certainly not for features like QoS or bandwidth usage monitoring.
Does anyone have recommendations? I want something that:
I do not trust myself to write software without some form of type checking. And I prefer more typing (ex: nullability, generics) when it is available.
This comes from a long history of realizing just how many errors end up coming down to type errors -- some piece of data was in a state I didn't account for or expect, because no type system was present to document and enforce its proper states.
Relatedly, I trust other programmers less when they say they do not need these tools to write good code. My immediate impression when someone says this is that they have more ego than self-awareness. In my opinion, it's obvious which of those makes for a better coworker, teammate, or co-contributor.
Fixing your code before the weekend is like cleaning your house before you go on vacation. So much nicer to come back to. 😊
Me: I dislike that the usual software engineer career path is to move into management. I just want to write cooode!
Also me: (leading standup today, being taskmaster, making sure we capture details into tickets, unblock people, shuffle priorities from Product Mgmt, volunteering to help other devs w/ something they're stuck on) I am actually quite good at this.
😑
So Twitter came out with a great new feature today: You're not allowed to link to other social media web sites.
What is a violation of this policy?
At both the Tweet level and the account level, we will remove any free promotion of prohibited 3rd-party social media platforms, such as linking out (i.e. using URLs) to any of the below platforms on Twitter, or providing your handle without a URL:
- Prohibited platforms:
- Facebook, Instagram, Mastodon, Truth Social, Tribel, Post and Nostr
- 3rd-party social media link aggregators such as linktr.ee, lnk.bio
It's a laughable attempt to stop the bleeding of people fleeing to other social networks, and it's going to Streisand Effect itself into the (figurative) Internet Hall of Fame. Most of the point of Twitter for many is finding and posting links to interesting stuff online.
What's next, a ban on "free promotion of prohibited 3rd-party news sources" that point out what a ridiculous policy this is? (Though, I suppose that's not far from what they're already doing -- banning reporters who unfavorably cover Musk.)
FeoBlog is not yet banned, of course, because it's not on anyone's radar. What can I do to get some more users and get it noticed?
If you want to give it a try, it's open source software, so you can download it and run your own server. Or, if you don't want to bother with all that, ping me and I'll get you set up with a free "account" on my server. :)
I've used AWS's SQS at several companies now. In general, it's a pretty reliable and performant message queue.
Previously, I'd used SQS queues in application code. A typical application asks for 1-10 messages from the SQS API, receives the messages, processes them, and marks them as completed, which removes them from the queue. If the application fails to do so within some timeout, it's assumed that the application has crashed/rebooted/etc, and the messages go back onto the queue, to be later fetched by some other instance of the application.
To avoid infinite loops (say, if you've got a message that is actually causing your app to crash, or otherwise can't be properly processed), each message has a "receive count" property associated with it. Each time the message is fetched from the queue, its receive count is incremented. If a message is not processed by the time the "maximum receive count" is reached, instead of going back onto the queue, it gets moved into a separate "dead-letter queue" (DLQ) which holds all such messages so they can be inspected and resolved (usually manually, by a human who got alerted about the problem).
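The redrive mechanics can be modeled in a few lines. (A toy model for illustration, not the SQS API.)

```typescript
// Toy model of SQS redrive: every receive increments the message's
// receive count, and once it exceeds maxReceiveCount the message is
// moved to the DLQ instead of being delivered.
interface QueueMessage {
  body: string;
  receiveCount: number;
}

function receive(
  queue: QueueMessage[],
  dlq: QueueMessage[],
  maxReceiveCount: number,
): QueueMessage | undefined {
  const msg = queue.shift();
  if (msg === undefined) return undefined;
  msg.receiveCount++;
  if (msg.receiveCount > maxReceiveCount) {
    // Dead-lettered: the consumer never sees this message again.
    dlq.push(msg);
    return receive(queue, dlq, maxReceiveCount);
  }
  return msg;
}
```

The key detail is that nothing in this model cares *who* did the receiving, which is exactly what makes the Lambda behavior described next so surprising.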
That generally works so well that today we were quite surprised to find that some messages were ending up in our DLQs despite the fact that the code we had written to handle said messages was not showing any errors or log messages about them. After pulling in multiple other developers to investigate, one of them finally gave us the answer, and it came down to the fact that we're using Lambdas as our message processor.
So here's the issue, which you'll run into if:
- your SQS queue is an event source for an AWS Lambda, and
- that Lambda's concurrency is limited enough that it gets throttled.
Whatever Amazon process feeds SQS messages into that lambda will fetch too many messages. (I'm not sure if there's a way to tell if it was in a large batch, or lots of individual fetches in parallel, but either way the result is the same.)
Every time it does this, it increments the messages' receive counts. And of course when they reach their max receive count, they go to the DLQ, without your code ever having seen them.
This happens outside of your control and unbeknownst to you. So when you get around to investigating your DLQ you'll be scratching your head trying to figure out why messages are in there. And there's no configuration you can change that fixes it. Even if you set the SQS batch size for the lambda to 1.
If you think you might be running into this problem, check two key stats in the AWS console: the "throttle" for the lambda, and the DLQ queue size. If you see a lambda that suddenly gets very throttled which correlates with lots of messages ending up in your DLQ, but see no errors in your logs, this is likely your culprit.
It seems crazy that it works this way, and seemingly has for years. AWS's internal code is doing the wrong thing and wasting developer hours across the globe. Ethically, there's also the question of whether you're getting billed for all of those erroneous message receives. But I'm mostly worried about having a bad system that is a pain in the ass to detect and work around.
Me, minutes before a meeting: Just one more line. One more line of code.
(15 minutes later, seeing a clock): Dangit, I'm late for my meeting.
Me: "Why do I put the cap back on my water bottle after every sip? This is annoying even to myself."
Also me: Knocks over the full bottle I just minutes before had placed between me and my keyboard and yet had somehow forgotten existed.
(Thankfully, the cap was on! 😆)
For a while I'd been maintaining 2 versions of the FeoBlog TypeScript client: one for Deno, and one for Node (as an NPM module).
But maintaining two codebases is not a great use of time. So now the Deno codebase is the canonical one, and I use DNT to translate that into a node module, which I then import into the FeoBlog UI, which you are probably using right now to read this post. :)
Is it weird that I'm starting to feel like having a phone number is not worth it?
First, I use actual phone conversations VERY rarely. If I'm home and want to have a voice conversation with someone, I usually use VoIP (usually: FaceTime Audio) because it has higher quality than cell phone calls. If I'm out and about and want to communicate meeting time/place with someone, I'm going to send (or expect) a text message. So there's the question about whether it's worthwhile continuing to pay for a service that I don't use.
But the real problem is that modern apps and online services use your phone number as if it's a unique ID. If you give some organization your phone number, they'll definitely use it to uniquely identify you, and possibly disclose it to third parties.
And, even if you don't give them your phone number directly, since apps can slurp your contact info from any of your friends' contact lists, they've still got it.
And if companies can store this data about you, that data can get hacked and leaked. "HaveIBeenPwned" recently added phone numbers to their search because it's become such a concern.
If you worry about giving out your Social Security number, you should probably worry just as much about giving out your phone number. To companies or your friends.
This doesn't even touch on the problem of spam/phishing/fraudulent calls, which is another real problem w/ the phone system.
So, despite having the same phone number since 1998, I'd love to get rid of mine. Unfortunately, I can't yet because so many systems (ex: banks, messaging apps) do use it to identify you.
Plus, imagine you give up (or just change) your phone number. Now your old number is available for re-use. If someone were to claim it, they could then use it to impersonate you on any systems that haven't been updated with your new (lack of) phone number.
I’m thankful for when the cat comes and gets me to come to bed, as if to say: “uh? Hey. I’m sleepy and I need some warm legs to curl up on. Can you get in bed already?” ❤️
So recently Elon has:
It sure is starting to seem like he paid a lot of money to delegitimize it as a communication platform.
Guess you can't get "cancelled" if people and bots are indistinguishable.
The weather finally got decently cold and we turned on the heat in the new house. Woke up at 3:15am broiling in my own bed. It turns out the previous owner had programmed the thermostat to go up to 75°F at some point in the night.
75!? I barely let the house get that warm during the summer! So I’m currently in the living room with the sliding door to the back patio cracked so I can cool off. 🥵
Am I weird in disliking inlay hints?
They're those little notes that your IDE can add to your code to show you what types things are, but they're not actually part of your source code. For an example, see TypeScript v4.4's documentation for inlay hints.
My opinion is that:
As an example, take this code:
function main() {
    console.log(foo("Hello", "world"))
}
// Imagine this function is in some other file, so it's not on the same screen.
function foo(target: string, greeting: string) {
    return `${greeting}, ${target}!`
}
If you're looking at just the call site, there's a non-obvious bug here because the foo() function takes two arguments of the same type, and the author of main() passed them to foo() in the wrong order.
Inlay hints propose to help with the issue by showing you function parameter names inline at your call site, like this:
function main() {
    console.log(foo(target: "Hello", greeting: "world"))
}
(target: and greeting: are added, and somehow highlighted to indicate that they're not code.)
Now it's more clear that you've got the arguments in the wrong order. But only if you're looking at the code in an IDE that's providing those inlay hints. If you're looking at just the raw source code (say, while doing code review, or spelunking through Git history), you don't see those hints. The developer is relying on extra features to make only their own workflow easier.
Without inlay hints, it's a bit more obvious that, hey, the ordering here can be ambiguous, I should make that more clear. Maybe we should make foo() more user-friendly?
Lots of languages support named parameters for this reason. TypeScript/JavaScript don't have named parameters directly, but often end up approximating them with object passing:
function foo({target, greeting}: FooArgs) {
    return `${greeting}, ${target}!`
}

interface FooArgs {
    target: string
    greeting: string
}
Now the call site is unambiguous without inlay hints:
foo({greeting: "Hello", target: "world"})
And, even better, our arguments can be in whatever order we want. (This syntax is even nicer in languages like Python or Kotlin that have built-in support for named parameters.)
The prime use of these kinds of hints is when you're forced to use some library that you didn't write that has a poor API. But IMO you're probably still better off writing your own shim that uses better types and/or named parameters to interact with that library, to save yourself the continued headache of dealing with it. Inlay hints just let you pretend it's not a problem for just long enough to pass the buck to the next developers who have to read/modify the code.
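To make that concrete, here's a sketch of the shim idea. Everything here is hypothetical: legacyFoo stands in for a third-party function you can't change.

```typescript
// Stand-in for a third-party function with an easy-to-misuse
// positional signature (hypothetical):
function legacyFoo(target: string, greeting: string): string {
    return `${greeting}, ${target}!`
}

// A thin shim that gives callers named parameters via object passing:
interface FooArgs {
    target: string
    greeting: string
}

function foo({target, greeting}: FooArgs): string {
    return legacyFoo(target, greeting)
}

// Call sites are now self-documenting, no inlay hints required:
console.log(foo({greeting: "Hello", target: "world"})) // "Hello, world!"
```

The shim is a few lines, lives in your source where reviewers can see it, and fixes the problem for every caller, not just the ones with the right IDE settings.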
I have a desktop gaming machine that runs Windows 11. It's not bad at games but it's so slow at things like, opening apps, opening settings, etc.
Is Windows 11 just this slow, or is something wrong?
It's so bad that I ran winsat formal to see if my NVMe "hard drive" was somehow misconfigured:
Results:
> Run Time 00:00:00.00
> Run Time 00:00:00.00
> CPU LZW Compression 1139.80 MB/s
> CPU AES256 Encryption 15057.26 MB/s
> CPU Vista Compression 2834.34 MB/s
> CPU SHA1 Hash 10656.56 MB/s
> Uniproc CPU LZW Compression 100.19 MB/s
> Uniproc CPU AES256 Encryption 986.78 MB/s
> Uniproc CPU Vista Compression 250.19 MB/s
> Uniproc CPU SHA1 Hash 774.01 MB/s
> Memory Performance 29614.11 MB/s
> Direct3D Batch Performance 42.00 F/s
> Direct3D Alpha Blend Performance 42.00 F/s
> Direct3D ALU Performance 42.00 F/s
> Direct3D Texture Load Performance 42.00 F/s
> Direct3D Batch Performance 42.00 F/s
> Direct3D Alpha Blend Performance 42.00 F/s
> Direct3D ALU Performance 42.00 F/s
> Direct3D Texture Load Performance 42.00 F/s
> Direct3D Geometry Performance 42.00 F/s
> Direct3D Geometry Performance 42.00 F/s
> Direct3D Constant Buffer Performance 42.00 F/s
> Video Memory Throughput 279385.00 MB/s
> Dshow Video Encode Time 0.00000 s
> Dshow Video Decode Time 0.00000 s
> Media Foundation Decode Time 0.00000 s
> Disk Sequential 64.0 Read 4159.90 MB/s 9.5
> Disk Random 16.0 Read 1007.15 MB/s 8.8
> Total Run Time 00:00:11.67
When it can read a gigabyte per second doing random access, I don't think the disk is the problem. The CPU is an "AMD Ryzen 7 3700X 8-Core Processor" at 3.59 GHz, which also shouldn't be a problem.
Anybody have tips beyond "LOL don't run Windows"?
Well, I started this morning fixing a minor bug in FeoBlog. But then the GitHub Action build failed which sent me down a day-long rabbit hole that ended up with me upgrading from ActixWeb v3 to v4.
It's a bit disappointing because Rust is in theory not supposed to break backward compatibility. But I guess some bits of their API leaked and then got used by libraries I was using.
Not really what I had planned for my Sunday but glad to be on newer versions of things, I guess? 😅
Me: You should remain professional and avoid burning bridges.
Facebook recruiter: Hi! Want to use ML to moderate virtual social spaces?
Me: On second thought, …
When Node became popular, I never understood the hype around server-side JavaScript, other than that it took what had until then been a mostly client-side language and made it usable on the server.
But the pitfalls of writing large systems on the server without type checking seemed too great. And I wasn't that fond of JavaScript at the time.
By the time I got around to playing with Node more seriously, TypeScript was a thing. In FeoBlog I wanted to write a browser-based client that would both be a nice UI and a great demo of the client/server capabilities of the system. I chose Svelte as my UI toolkit, and I very much enjoy the features it offers. However, bundling JavaScript for the browser is still a pain to get working. And if you ship everything as a Single-Page Application, you lose out on indexing, and old/underpowered browsers.
FeoBlog actually has remnants of an early server-side template system which it falls back on for that purpose, but you lose out on a lot of features, and it's lost parity with the new Svelte UI. It would be nice if I could write code once and have it render on the server OR the client.
So now I'm starting to see the appeal of server-side JS. But... I don't really want to run Node. Thankfully there's Deno, which I've already enjoyed writing some scripts for.
AND, there's a cool new web framework called Fresh. It's got the same super-fast dev cycle that I've enjoyed with Deno, and the result is code that can render things on the server OR client.
If you want to see a(n incidental) demo of Fresh, take a look at Deno Deploy: Crazy Fast Cloud Functions - Architecture Speedrun, which is where I first discovered it.
Looking forward to see where this goes!
I do not have a kind view of anyone who brags about not voting. And anyone trying to convince you not to vote has motives you should definitely question.
… But watching the Democrats just roll over on every damned thing is really making it feel like a pointless ritual. Democracy Theater.
They're guaranteed the vote of anyone like me who is against what Republicans are doing, so won't throw away their vote on a further-left party. But as a result they keep moving right to try to pick up more "middle" voters.
Feeling a bit frustrated and hopeless about the future for the U.S.
Just to be clear, though: I'll still be voting.
Uhh, WaPo… is this an ad for Trump? "Inaction" as democracy "came under attack"? He was and is continuing to attack democracy by continuing to lie about the legitimacy of the election. He spoke at the rally that ended up invading the capitol while the election was being finalized! And told them to do it! WTF kind of reframing is this?
This is as bad as the bootlicking "shots were fired and someone died at an altercation involving police" trope.
I've been writing Java since before Generics and still ran into this landmine:
Coworker (reviewing my code): container.contains(null) can throw a NullPointerException.
Me: I don't think so, the docs say:
Returns true if this collection contains the specified element. More formally, returns true if and only if this collection contains at least one element e such that (o==null ? e==null : o.equals(e)).
And this code works as I expect:
import java.util.*;

public class Contains {
    public static void main(String[] args) {
        // Interestingly, List.of requires all its elements be non-null. Weird.
        // var list1 = List.of("foo", "bar", "baz");
        // var list2 = List.of("foo", "bar", null);
        var list1 = makeCollection("foo", "bar", "baz");
        var list2 = makeCollection("foo", "bar", null);
        check(list1);
        check(list2);
    }

    private static Collection<String> makeCollection(String... args) {
        // return Arrays.asList(args);
        return new HashSet<String>(Arrays.asList(args));
    }

    private static void check(Collection<String> list) {
        System.out.println(list.contains(null));
    }
}
Coworker: read a bit further. Docs also say:
Throws […] NullPointerException - if the specified element is null and this collection does not permit null elements (optional)
… sure enough. In my case I'm actually using a Set.of(predefined, elements), and that particular implementation will throw if passed a null.
UGHHh. NULLS.
FWIW, Kotlin handles this much more nicely:
fun main() {
    val c1 = setOf("foo", "bar")
    val c2 = setOf("foo", null)

    val value: String? = null
    println(c1.contains(value))
    println(c2.contains(value))
}
… though you can only depend on that sane behavior when using its setOf() constructor. If you might ever be passed a non-null-safe Java Collection, you're back to needing to protect yourself against NPEs.
So the Supreme Court is going to overturn Roe v. Wade and basically let religious fundamentalists control women's bodies.
I feel the need to write something about it.
But then the next thought is: Oh, I should take some time, really organize my thoughts, find links to sources, etc, etc. That way lies me never writing anything. "Perfect is the enemy of good [enough]", etc.
So instead, here's my stream-of-thought braindump.
First, this is terrible. It's terrible for women. Especially in states that want to ban abortion. (And even with RvW many had already effectively banned it by making it practically unavailable.) Especially poor women who don't have the means and connections to leave for more liberal states.
If the court is saying there is no right to privacy, next you'll have states start outlawing contraception.
Then they'll pass laws saying it's illegal to travel to another state to get an abortion. (I think some already exist for minors?)
Without a right to privacy, anti-sodomy laws are back on the table, and also gay marriage bans.
Without a right to privacy, the government can regulate all sorts of personal details about your life with … what limits?
The permissibility of abortion, and the limitations upon it, are to be resolved like most important questions in our democracy: by citizens trying to persuade one another and then voting.
Yeah, because that worked out so well for slavery, and interracial marriage, and segregation, and gay marriage. And, oh, what's that? Abortion.
It's just such a big "fuck you". "If you want rights, you should merely convince the majority to stop taking them from you."
And, as I've seen others point out, that's a double "fuck you" in the context of the court recently gutting the Voting Rights Act and states ramping up voter suppression and gerrymandering.
The political process is broken and I don't see it getting better any time soon.
$OurProduct is a love letter to $audience. ❤️
Look, if you're describing something as a "love letter" and then charging money for it, that's "solicitation".
Saw this one while out yesterday. Anybody know what kind of flower it is?
Uh-oh. "Svelte" has taken over as the language with the most lines of code in FeoBlog.
It's funny. I started FeoBlog because I wanted the data structure to be the way that distributed social networks work. But in order to make using that appealing, you've got to have a nice UI. And it turns out there's a lot involved in working toward one of those. Who knew?
So sounds like Twitter's getting bought in a hostile takeover.
I've been working on the next version of FeoBlog and I had a couple more features that I wanted to sneak in, but I should just release what I've got. (Agile! (lulz))
If you want to help me test it, or just want to play around with an open, distributed platform let me know!
I've been working on the next version of FeoBlog quite a bit lately. It's been fun!
One of the new features will be allowing FeoBlog to remember your private key for you, since working with them can be a bit cumbersome. I enjoyed this little experimental UI for letting users configure that behavior based on their preferred security level:
However, it ended up being a bit cumbersome to use in practice, so I'm going to change it to instead give you all the options, and then details about the security implications of the options you set. That way, it's less about shaming the user to choose the higher security level, and more about letting them configure it how they want and informing them of the consequences.
Another of the new features is Windows support! I switched to using ESBuild instead of Snowpack. Not only is it able to bundle properly on Windows, I think it's actually faster as well. AND I found a plugin that lets me write my web worker as a module and inline it within the app bundle. 🎉 Definitely would recommend trying it if you're deploying JS to the browser.
Since about v0.5.0, I've been using FeoBlog as my own sort of RSS reader. I've got a few scripts that read Twitter, Mastodon, and some RSS feeds into FeoBlog for me, and then I just view my "My Feed" page and there's everything in one convenient place.
One surprising benefit of this is that I actually feel less of an urge to keep on top of things. The feed isn't going to get reordered by some unknown algorithm. There's no little "unread" counter telling me how many more I have to read until I'm "caught up". Plus, those posts aren't going to go away, I'll always be able to find them in the feed history. (Though it could be easier.)
So, generally a more healthy relationship with social media. Which is to say: I'm reading a bit less than before.
I’m flying home to San Diego today. Trying to check in to my American Airlines flight, they’re telling me I can’t carry on my bag, and offering to charge me to check it.
But their web site says:
… sooo which is it? I guess I’ll check in at the counter and see. Hopefully the lines aren’t bad. 🤞
https://github.com/NfNitLoop/feoblog/releases/tag/v0.6.0
Support for the Open Graph Protocol.
Now when you share links to other web sites (or Discord), they'll be able to generate previews if they support OGP.
Quick access to share links.
Click the arrow at the top-right corner of a post to access share links.
db prune to remove data that's no longer being used.
db usage to see who's hogging all your disk space.
(See also: The tablestream crate I created to help with this output.)
So, working on FeoBlog, I wanted to print some data into a table in a terminal, and I was picky about how I wanted it done.
In particular, I wanted to be able to:
The existing ones I found on crates.io required holding the table in memory.
So I just wrote my own:
There's now an official Discord server for FeoBlog. If you have questions, feedback, or just want to chat, drop on by!
Note: Discord invite links can expire, so check this user's profile for the latest link if the above one doesn't work.
Released: July 18, 2021
https://github.com/NfNitLoop/feoblog/releases/tag/v0.5.0
You can now filter and search your "My Feed" page.
Is someone posting a bit too much today? You can temporarily hide them from
your feed to see what everyone else has to say. Looking for a post you saw
last week? Now you can search for a keyword and view only posts/comments that
mention that.
Posts are no longer clickable.
Previously, the entire block containing a post was clickable, and would take
you to the page for that post. But that resulted in a lot of accidental
clicks. Also, since the cursor changed to a pointer for the whole block, it
was difficult to see if images were clickable. Now that behavior is gone. You
can click on the timestamp of a post to go to a page for just that post.
#52 Automatically redirect to the "My Feed" page when logged in.
If you're logged in, you're probably repeatedly coming to FeoBlog to check
your feed. So that's now the default view.
Whew. Have been working on some nice FeoBlog changes that I'll probably release this weekend.
I should probably make another video. The UI is looking much better now than the one I showed in v0.1. Plus, I've learned a couple things about video capture since then.
But for now, shower and bed. 😴
Now available here! https://deno.land/x/feotweet@v0.2.0
It adds support for syncing a single user's tweets to FeoBlog, as well as copying tweet attachments into FeoBlog.
Developing in Deno is still pretty fun. Though I did spend a couple days scratching my head due to this bug.
Apparently HTTP clients don't do well when you close the HTTP connection while they're still sending bytes at you, even if you've already sent a response.
The HTTP 1.1 spec isn't super clear on what should happen in this case. For example, it says this about closing connections, but seems to imply it's only for idle connections:
A client, server, or proxy MAY close the transport connection at any time. For example, a client might have started to send a new request at the same time that the server has decided to close the "idle" connection. From the server's point of view, the connection is being closed while it was idle, but from the client's point of view, a request is in progress.
This means that clients, servers, and proxies MUST be able to recover from asynchronous close events.
And it says this about "Client Behavior if Server Prematurely Closes Connection":
If at any point an error status is received, the client SHOULD NOT continue and SHOULD close the connection if it has not completed sending the request message.
... but that's only in the case of an "error" status, not an OK status.
Chrome handled this case by ... pausing for about 5 seconds, then continuing without error. (!?) And Deno handled this case by ... well, in the case I reproduced in that bug report, the next call to fetch() would fail, but during debugging I saw other sorts of odd failures. Like calling .read() on a Deno.Reader that seemed completely unrelated to the HTTP connection would fail and say the rid (Deno "resource ID") was invalid. Yeah, that one had me confused for a while. I wasn't able to reproduce that one in a minimal example, though.
I was able to work around the issue by just waiting for all the bytes to be sent before sending an HTTP response. But this seems like a thing that people could use to DoS your server. If you try to be nice and read all these unnecessary bytes they sent to you they can just do it forever. Though I guess there are countless other ways to DoS an HTTP server in addition to this, so what's one more?
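That workaround, sketched as a generic fetch-style handler. (The Request/Response handler shape here is an assumption for illustration, not FeoBlog's actual server code.)

```typescript
// Drain whatever the client is still sending before replying, so the
// connection never closes on them mid-request.
async function respondAfterDraining(req: Request): Promise<Response> {
    if (req.body) {
        const reader = req.body.getReader()
        // Read and discard chunks until the client finishes sending. As noted
        // above, this lets a hostile client waste your bandwidth indefinitely.
        while (!(await reader.read()).done) { /* discard */ }
    }
    return new Response("Payload Too Large", {status: 413})
}
```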
One of the first questions I got when I told folks about FeoBlog was "Does it support ActivityPub?" (i.e.: "Does it interoperate with Mastodon?")
And while there are some reasons why it can't interoperate directly with ActivityPub, it's always possible to sync data back and forth.
So, I wrote feomasto, a little tool to sync my Mastodon feed into FeoBlog. You can see the results here.
Some design decisions I'm interested to hear feedback on:
Since FeoBlog is all public, I decided to only sync "public" and "unlisted" posts from Mastodon, so as not to expose any private posts from users I follow. (i.e.: not "private" or "direct" posts.)
So as not to unnecessarily use bandwidth of (often for-free) Mastodon servers, I decided to not inline images. Though, I could see myself coming back and deciding to add them as FeoBlog attachments, which would allow for inlining them without touching Mastodon servers. (Beyond the first download.) For now, I just link to the attachments so you can click in and view them as you wish.
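The visibility decision boils down to a tiny filter. A sketch (the Status type and shouldSync name are mine; the visibility values are the ones the Mastodon API uses):

```typescript
// The four visibility levels a Mastodon status can have:
type Visibility = "public" | "unlisted" | "private" | "direct"

interface Status {
    visibility: Visibility
    // ...the many other Mastodon status fields, elided here
}

// Only mirror statuses that were already world-readable:
function shouldSync(status: Status): boolean {
    return status.visibility === "public" || status.visibility === "unlisted"
}

console.log(shouldSync({visibility: "unlisted"})) // true
console.log(shouldSync({visibility: "direct"}))   // false
```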
Let me know what you think!
I'm also still really enjoying writing stuff with Deno. Not having to deal with package.json, tsconfig.json, npm, etc, just removes a lot of friction. Plus, it's super easy to write, run, and share a standalone script that imports dependencies from online. (Though, I haven't really used the new module stuff in Node, so maybe you can do that there too these days?)
Had fun writing code for Deno! :D
https://deno.land/x/rss2fb@v0.1.1
Just published my first #Deno package. It's pretty cool!
One unexpectedly nice thing -- if there are errors in code hosted on https://deno.land, the error message contains URLs that take you directly to the line(s) of code in a web page. ❤
https://twitter.com/NfNitLoop/status/1411547380302839811
It's a pretty fun and easy way to write and distribute some TypeScript!
I want to write more, but I was "done in 20 minutes" a couple hours ago, and I need to clean the apartment. We're heading out of town tomorrow to see some friends for the 4th of July. So I don't get to play with code again until we're back on Monday afternoon. 😢😛
Released: June 25, 2021 https://github.com/NfNitLoop/feoblog/releases/tag/v0.4.0
The web client is now the default view.
FeoBlog has two ways to access content. One is plain HTML (A.K.A.: Web 1.0),
which works well for old browsers and search engines. The other is a web
client (Web 2.0), which has a nicer interface. Now, if you visit a page in a
browser that supports JavaScript, you'll get automatically redirected to the
newer, nicer web client.
Post drafts are now saved.
If you navigate away from the "New Post" page and come back later, your post
will still be there. Whew!
Added some helpful warnings when writing markdown posts
Now if you forget to link that [reference], you'll get a warning reminding you to add a link.
Better support for password managers
You should save your private key ("password") in a password manager. But some
password managers were filling in the wrong fields. Hopefully that's fixed.
(If not, please open an issue!)
An updated README to explain the core principles behind FeoBlog's design
Support for attachments on iOS (and probably Android)
Oops. You can't easily drag-and-drop on a phone, so I added a button to
attach files. Now you can take photos and easily upload them from your phone!
Improved automatic link generation when adding attachments
When you add an attachment to a post, FeoBlog will generate a [link] and a [link]: files/reference.example for you. Now it'll do a better job of placing those within an existing document.
re:
(Twitter)
I assume that's in response to this line in the blog post:
- No more societal and political discussions on our company Basecamp account.
This is such a tone-deaf "got my privilege blinders on" rule.
What counts as "political" discussion?
I've worked with people who I'm sure would claim it's "political" that I casually mention my husband in work chat, since they're "politically" (and likely religiously) against same-sex marriage.
Is openly mourning the death of yet another black person killed by a cop "political" because "black lives matter" is somehow a sentiment that needs to be "both sides"-'d politically?
What about saying that you're for equal pay for women, and compensation transparency?
I'm not saying work chat should have a #politics channel or invite irrelevant quarrels, but when you're in chat with folks 8+ hours a day (especially when you're remote), "real life" stuff creeps into chat. It's impossible to separate politics and real life in general, but it's doubly impossible when your existence and lived experience is politicized.
I suspect what counts as "political" will just be whatever makes trouble for The Company, or makes the leaders uncomfortable. They don't want to take a side because that's hard, so they'd rather just pretend the problem doesn't exist. And who will bear the brunt of that rule? People who speak up about issues.
And that leads me to the thing that always makes me confused at this kind of thinking, something that feels like cognitive dissonance when I come across it at a company: You can't be for "diversity and inclusion" and against talking about "political issues". Your workplace is not inclusive if some of your coworkers need to censor themselves, and others get to be "the default". And your workplace will not be as diverse once those employees find other, better places to work.
But, maybe Basecamp isn't for diversity and inclusion? Seems like their "Diversity, Equity, and Inclusion" working group is getting axed:
- No more committees. For nearly all of our 21 year existence, we were proudly committee-free. [...] But recently, a few sprung up. No longer. We're turning things back over to the person (or people) who were distinctly hired to make those decisions. The responsibility for DEI work returns to Andrea, our head of People Ops.
I'd love to hear from members of that working group how they feel about these changes.
My ideal workplace would be one that actually sticks to and implements its stated values. If you value DEI, then it seems like the issue isn't "political" discussion, it's sexist, racist, homophobic, transphobic, etc., statements. And, hey, some of those are already illegal in the workplace so you're probably already enforcing bans on that kind of speech. If you've got people in your company who are angry because they don't get to be the "other side" of those "political" issues, I'd suggest you've found the "divisive" problem in your company culture.
We ended up buying the e-bikes that I was thinking about. They've been great! Definitely happy to have a motor to help me up some of the steep hills in this neighborhood. Even with the electric motor at full, and on the lowest gear, some of them are quite tough, so I definitely wouldn't be managing this on a non-electric bike.
After a few trips out, my husband found a nice circular route for us to ride which has been our de facto route recently. It starts with a climb up into some hilly neighborhoods, and just when I'm getting exhausted it ends with a nice downhill coast most of the way back home.
We got a little bit of rain last month so the hillsides are looking nice and green. Here's one of my favorite views from that route:
A few months ago, I was perusing the YouTubes, as one does while social distancing during a pandemic, and it served up a video review of an e-bike. It's been a while since I owned a bicycle, so the tech was new to me, and I watched enough videos that now YouTube thinks I'm an e-bike fanatic or something.
So after months of watching videos I thought, hey, I should try riding one of those things. So, today Heiðar and I went out to test ride a few.
At the first stop, I tried out a Trek Verve+ 2, and (I think) the Trek Verve+ 3. They were both quite nice! They've got a "mid-drive" electric motor, situated down between the pedals, that assists you as you pedal. Even if you turn it off completely, the bike rode quite nicely despite the extra weight.
At the second stop, we tried out a brand whose name I'll avoid mentioning. When their sales guy heard we'd tried a Trek earlier that day, he spent a lot of time telling us how bad it was. He claimed mid-drive motors are worse because they're harder to pedal when the assist is off. (I did not find this to be the case, and in fact, the bikes he was selling with hub drives in the wheels seemed harder to pedal with no assist.)
Still, he was nice and we each tested a few different models. I'm both out of practice and out of shape so we called it quits for the day, but I'm looking forward to trying some more later this week.
I'm trying to decide if the expense will be worth it. I'd like some activity to get me out of the house, and biking sounds fun. We're thinking e-bikes because the neighborhood we live in (and San Diego in general) is pretty hilly, so having something to help get up the steeper hills would be nice. But there's always the chance that after biking for a while the novelty will wear off and we'll have bought ourselves a couple expensive, electrified dust collectors.
This release brings file attachments to your posts, so bring on the cat pictures!
It also adds automatic builds and releases via GitHub Actions, which is a nice thing for me. 😊
Released: Feb. 25, 2021
https://github.com/NfNitLoop/feoblog/releases/tag/v0.3.0
Note: There's a known issue (Bug #16) that is preventing Windows builds from working at the moment. I'll enable Windows builds when that's fixed.
FeoBlog v0.3.0 is out and it supports file attachments. Everyone knows The Internet is for pictures of your cat, so here we go.
Version 0.2.0 is out now.
feoblog db upgrade command to keep your database up-to-date with the latest versions of FeoBlog. And more. See all the details on GitHub.
It's funny. I'm way more interested in writing FeoBlog than posting to it. Sometimes I feel my own words are boring or tedious, so I'd rather just read others'.
I've been having fun working on the next version of FeoBlog. It's going to have comments, as well as some other features I worked on along the way. (Faster loading of items. Relative timestamps. Some style updates. Uhh... other stuff I've forgotten. Don't worry, there'll be a changelog. 😛)
Curious how easy it is to write a client for FeoBlog?
Check out fb-rss.py, a utility to sync an RSS feed into FeoBlog.
A video demo of the features available in v0.1.0 is now available on YouTube.
If you're interested in learning more, but haven't had time to set up your own server, hopefully this will help!
Today I recorded a screencast demoing some of the features of FeoBlog. I'm excited to share it with the world, but first I had to brush up on my video editing skills. (... as if I had any to start with.)
If I were on a Mac, I'd probably have just used iMovie. But, since I recorded using OBS on my Windows PC, I thought I'd just edit the video there.
So Windows used to have an app called "Windows Movie Maker". Apparently that's just been folded into the Photos app in Windows 10. After I used OBS to "remux" my files to .mp4 files, Photos was able to edit them. Basic trimming worked, but when I went to export things, the resolution was limited to 1080p, though I'd recorded in 1440p.
I was a bit worried about things becoming blurry since some text was unfortunately already a bit small, so I looked for alternatives.
I came across a nice video on YouTube that recommended Shotcut, so I gave that a try. But at first it seemed unable to play my video files.
After some googling, I found that I needed to go to Settings → Display Method → OpenGL (instead of DirectX). That seemed to let it render my videos, but despite having really beefy hardware, things were still really sluggish.
I briefly tried out OpenShot, which has a much nicer site than Shotcut, but it had even more performance issues. (At one point, it took ~15 seconds to close after I'd clicked the close button, and all it was doing was playing back video clips I had put into the timeline.)
Back in Shotcut, I was able to edit things into a somewhat nice state. Tips for any beginners:
If playing video/audio becomes choppy, save the project and re-launch the application.
Save often. I found myself accidentally pressing keys that mapped to shortcuts I didn't know about. Having a restore point is handy.
Though the app tells you it'll auto-detect your video size & frame rate from the first video you attach, it does not seem to do so for 1440p. You'll need to add a custom video mode in Settings → Video Mode.
I'd neglected to do this, and my first export shrank my 1440p video to 1080p, then upscaled it back to 1440p since that was the export resolution I chose. Took me a while to figure out why just exporting my video had made it so blurry. 👎
Exports are still really slow, unfortunately. My 21.5-minute video is taking over an hour to export. It looks like a lot of operations may be CPU bound. I wonder if it would've been faster to just copy things over to my old MacBook Air and use iMovie.
🐱 stares at my feet
"... yes?"
🐱 looks up at my face, then at my feet
Oh! Uh... is this the first time you've seen me in socks? 😆
So we adopted a cat about a month ago. We named him Giles. There are some pics on Instagram.
It's been nice seeing him come out of his shell. He was quite shy when we got him. Before we let him have free roam of the apartment, he would hide between the shower curtain and the front of the tub. Once he had access to the whole apartment, he lived under the bed until we started fishing him out from under there and having "no bedroom access" time.
Now he'll happily spend time with us in the living room or in my office, even when he has access to his hiding spot. But sudden movements do still send him running for cover. We can't scold him for anything because he seems to constantly think he's already in trouble. Poor thing!
The first publicly released version of FeoBlog, version 0.1.0 is now available on GitHub! 🎉
This is the first post on my public FeoBlog server. 🎉
Tomorrow, I'll work to get the source code cleaned up and published to GitHub so people can run their own servers. For today, ping me personally if you want to be an early tester. :)