
# The Way We Interact With Computers Sucks. #

What do you see when you sit down to work? My guess is a desk pressed against a wall, maybe a few shelves, one, maybe two monitors in front of you, and a keyboard and mouse dominating the work area of your desk. How do you access information? Do you simply Google something and if it’s not on the first page give up? How do you get reliable information when you have a question? How do you interpret results that you find? How do you store information you’ve collected? How do you filter information to get exactly what you’re looking for?

If you’re reading this, I’d wager that you’ve at least thought about this a little. Maybe you’re a Linux user, deep into the tiling window manager rabbit hole. Maybe you already have a fancy keyboard, monitors surrounding you on all sides. But I’m here to tell you that no matter how hard you’ve tried, what you’re using still probably sucks.

I think we, as individuals and as businesses, need to invest more in our work spaces, probably well beyond what most people would even consider. I’m not talking about adding a third monitor or giving everyone an artistic environment. I’m talking about setting up a work environment that actively contributes to finding, accessing, retrieving, storing, consuming, and creating data instead of merely being the medium on which that work takes place - where data can be anything: art, documentation, code, whatever.

## HCI? #

Human Computer Interaction, or HCI, is an interesting topic to me. So many people spend a crazy amount of time in front of a screen, you’d think we’d have some damn good hardware and software to use while we further deepen the permanent butt-shaped indentation in our seats, but alas, instead most people use ˢʰᵘᵈᵈᵉʳ Windows.

Back in late 2018 I wrote the first version of this post and basically everything in there has been restated here but better. Since then I’ve talked to many others about this, read many other blog posts, and just generally done a lot of new things with computers and learned a lot, so here’s HCI2: Electric Boogaloo.

## Chapter 1: The Physical #

Computers aren’t just about software, websites, and programs. As the most powerful tool that most people have access to, and often one we spend many hours a day looking at, it makes sense that we make working with that tool comfortable, straightforward, and healthy. In general, we’ve mostly sucked at this, and while some things are getting better (monitor resolution), some things have gotten significantly worse (keyboards). Furthermore, the desks we sit at and the air we breathe while working are in need of attention too.

### Our Input Methods Suck. #

What the fuck is this shit?

Well, this shit is membrane keys. They use a lil' membrane of flexible plastic to make a button that when you press completes a circuit. They feel like mush and generally suck to type on. We can do much, much better:

Okay, cool, a mechanical keyboard. Now at least each switch is, well, mechanical. There’s a spring and actual feedback to your fingers and ears when you hit a key. But the keys are still arranged horrendously and it’s not at all fitting for human hands. So… What next?

Alright, so this is what I use. For me, this is great. It’s still attainable and usable by mere mortals without infinitely deep wallets(1) and with software as it exists today, but I think that’s largely because it’s about as far from normal as you can get before things start being a royal pain in the ass. And don’t get me wrong, there’s still some PITA incurred from using a keyboard this weird: switching to a traditional keyboard will always feel weird, other people can’t easily use your computer (not sure this is reallllly a downside…), configuration still basically requires you to understand the basics of C programming, and some things that expect keys to be in certain places (games) will often be awkward.

But that’s not where I want to go with this blog post. There are plenty of people who have gone on for like 20 pages about how amazing QMK and the Ergodox are; I want to go deeper.

Let me start by lubing your brain up a little.

This is Dasher, a software keyboard using predictive text that should sort of blow your mind. Cool, innit? Now, your thoughts probably went something like this:

1. What the fuck am I looking at?
2. Oh damn that’s really cool, I want to try it.
3. Oh, hmm, but honestly even with tons of practice I can still probably type faster on a normal keyboard…

And yep, that’s about the right conclusion. End of story, let’s all go home, blog post over.

But wait?

What if –>insert your preferred mechanical keyboard here<– wasn’t the best possible typing experience? What would something better look like?

Here is where our story really begins. You have to start asking some questions that sound like you just smoked a bowl; please feel free to read all of the following in the mental voice of a stoned dude saying “Like, man,” before each point:

• Why do we even need to type?
• What is it we want to input in the first place?
• What makes something good as an input device?

Effectively, just keep asking whys and whats until we’re at the core of the question.

So, let’s start at the top:

Q: Why do we even need to type?

A: We don’t. We can draw, dictate, or use any of a number of other methods. Typing is convenient because it can be kept semi-private, is tactile (assuming your keeb isn’t a 💩), and is generally fast enough to keep up with the speed of thought if you know how to touch type.

Q: What do we want to input in the first place?

A: Text (in multiple languages), Links, Images, Diagrams, Code, Commands? Dates? Our wandering thoughts? Spur of the moment ideas? Everything. Keyboards happen to be a decent input device for some of these things, and tend to pretty much suck for others. (Come on, make art by typing in RGB values for every pixel. 𝐼 𝒹𝒶𝓇𝑒 𝓎𝑜𝓊)

Q: What makes something good as an input device?

A: Now I’m asking myself a hard question. It’s easy to list off good qualities of the familiar: tactile, responsive (low latency), customizable, no αɯƙɯαɾԃ movements, keeps the user’s hands and arms (hmmmm…) in a natural position, characters laid out well for the user’s language (and able to be switched live). But how do you get down to the core of this question without just listing traits of ‘goodness’ in existing things? What actual goals should we strive for? Should it be one unified device? (Hint: touchscreens are great, but no.) Which goals outweigh others? Is ergonomics more important than tactility? Can there even be a defined list of what makes an input device good?

No.

That’s why this topic is so interesting. My hands are not your hands. I play music, make art, write stories (and blog posts), and code. Part of the reason I got an Ergodone keyboard in the first place is that I was starting to experience some nasty hand cramps that were particularly bad when I was switching between guitar and typing a lot. I was willing to try just about anything, which I did. I switched to Dvorak(2), an alternative keyboard layout. That didn’t do the trick, so I tried the Ergodone (still using Dvorak; my layout is here) and haven’t looked back. But that leaves an interesting question: ignoring the obvious reason of price, why are people still using something that seems to be obviously worse? In a word: familiarity.

Frankly, fuck that.

We can do so much better. Our phones have autocorrect and limited text expansion; why do our beefier systems not do something a thousand times better with their superior onboard resources? We could at least make it easier to grab text out of screenshots. But why are we not doing natural language processing so that I can verbally or textually describe a graph or math equation without needing to have committed to memory some archaic set of $$\LaTeX$$ symbol names (3)? Why are so many fields limited to ASCII, or maybe UTF-8, giving us those sweet, sweet emoji 🔥🔥🔥💯💯💯💯, when we could have something that allows text, diagrams, pictures, videos, etc.?

Why are we limited to only buttons on our keyboards anyway? I have a BDN9 macro pad with encoders that I can use to input keystrokes too; this lets me map knobs to functions that make sense, like page up/down, volume up/down, brush size in an art program, etc.
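To be concrete about what mapping knobs to functions could look like in software, here’s a tiny hypothetical sketch - the encoder indexes and action names are made up for illustration, not my actual BDN9 config:

```python
# Illustrative only: translate raw rotary-encoder ticks into named actions.
# A real setup (e.g. QMK firmware) would emit keycodes instead.

ENCODER_MAP = {
    # (encoder_index, direction) -> action name
    (0, "cw"): "volume_up",
    (0, "ccw"): "volume_down",
    (1, "cw"): "page_down",
    (1, "ccw"): "page_up",
}

def handle_encoder(index, clockwise):
    """Translate one encoder tick into a named action ('noop' if unmapped)."""
    direction = "cw" if clockwise else "ccw"
    return ENCODER_MAP.get((index, direction), "noop")
```

The point is that the knob itself is generic; all the meaning lives in the map, which could be swapped per program.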

Also, if you’re the kind of person that needs to write long walls of text with minimal formatting, you might want to check out stenography:

### More than Just the Keyboard: #

#### Mice, Touch Pads, and Track Balls #

Take your hand off your phone screen or mouse for a second and hold it in front of your face. Wiggle your fingers, move your wrist. With that last instruction, what did you do? Did you rotate your hand, move it up and down, or twist it? Now, consider how each of your fingers possesses such fine motor control. Is a mouse really made to take advantage of this?

What would be better? Clearly pointing with a mouse is actually already pretty good. If you just need more functions a gaming mouse with a plethora of buttons goes a long way, but I think that’s more of a stop-gap. What could we do better?

Well, there’s already the Leap Motion

The Leap Motion hardware is basically just two cameras without an infrared filter, plus some infrared LEDs.

The Leap Motion software does a really good job of using this information to reconstruct the finger positions in software.

and the Lexip 3D mouse - https://www.lexip.co

Note: this mouse, the Lexip PU94, is a complete disaster and the result of what is effectively a failed Kickstarter. The Windows drivers are broken; I’m currently working on a custom Linux driver for it, but, like, yeah. Don’t buy one.

However, using the Leap for everything would require you to hold your hands out to point - something anybody who ever owned a Wii can tell you gets very tiring very quickly - and the Lexip PU94 would be far too awkward to use daily.

So, I honestly don’t know what the pointing device of the future looks like. Maybe it involves a mix of a mouse, finger tracking, eye control, and joysticks? The use of WiiMotes for projection mapping shows that there is room for using different devices for different kinds of input, though. Maybe it’s more like the non-game uses the Kinect has found after its official lifetime? I’m not really sure.

As for issues with current tech: mouse acceleration, touch pad responsiveness, and touch pad dead zones are all big problems, and, like, I don’t understand how that’s still a problem in 2021.

To give credit where credit is due, there have been minor changes that are trending positive, such as Logitech’s MX Master Line with the infinite scroll wheels, and a general trend for reducing latency and increasing customization options of higher-end mice.

Unfortunately, the drivers for configuring these options are still largely proprietary and anything but standardized, making integration between brands, or OS built-in support, all but impossible. Of everything listed up to this point, I actually think this is the biggest problem. Without a consistent, extendable interface, about the best that can be exposed is awkward hacks where joysticks are mapped as if they’re on a full game controller, or keys are just mapped to macros of keyboard keys or to existing-but-unused keys like F13 through F24 or Scroll Lock. This is a massive problem. More on that in a bit.

#### Foot Controllers #

MorningStar MC6, a MIDI foot pedal that’s highly customizable and has inputs for connecting analog expression pedals (the green thing on the left). 10/10 recommended

Stinky Footboard - it’s effectively a 4-key mechanical keyboard for your feet. The driver for this product sucks, so I swapped the controller out for a Pro Micro running QMK.

And, don’t get me wrong, both are great. Being able to use my feet to control my system really allows for a lot of flexibility; however, your feet are only capable of so much fine-grained control. Unlike keyboards, where our fingers are great at hitting a bunch of individual keys, our feet are better at hitting a small number of larger buttons or controlling pedals (think pressure-sensitive, like a gas pedal). So, with that in mind, you only get a limited number of inputs you can practically control, and those inputs really need to count. That’s the problem. They really don’t.

Even with all the power that using QMK gives and all the configuration options exposed in the MC6’s config editor, they lack one big thing: Context sensitivity.

Essentially, if my inputs are limited, there’s a limited number of solutions. The MC6 does offer multiple pages of inputs by stepping on two buttons at once to page up or down, but that’s not as good as just sending messages to the device to let it know that I’ve switched what I’m doing, so it can switch its active page/profile/whatever accordingly - and the problem there is that this has to be set up manually. With context sensitivity, devices could in theory change the way they behave to be optimal for whatever you’re working on. Right now I have the Stinky Footboard set to control media playback (⏯️, ⏭️, ⏮️, 🔇), but that’s all it’s set to, and that seems like a bit of a waste. Sure, I could set up multiple profiles, but without them being context-aware that’s a bit of a moot point. And, alright, the original driver software for it, as well as the software for most of my ‘gaming’ peripherals, supports checking what the running program is, but that seems like a really bad solution. Instead, the OS should handle this by letting devices send generic button up/down events and doing ALL of the mapping in software.
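The OS-side mapping I’m describing could be sketched something like this - a toy model, not real driver code, and the app names and actions are invented:

```python
# Toy model of context-sensitive input mapping: the pedal/pad sends generic
# button numbers, and software picks the action based on the focused app.

PROFILES = {
    "krita": {1: "undo", 2: "brush_smaller", 3: "brush_bigger", 4: "redo"},
    "mpv":   {1: "play_pause", 2: "prev_track", 3: "next_track", 4: "mute"},
}
# Fall back to media controls when no app-specific profile exists.
DEFAULT_PROFILE = {1: "play_pause", 2: "prev_track", 3: "next_track", 4: "mute"}

def map_button(focused_app, button):
    """Resolve a generic button event to an action for the focused app."""
    profile = PROFILES.get(focused_app, DEFAULT_PROFILE)
    return profile.get(button, "noop")
```

Because the device only ever reports “button 1 went down,” switching context costs nothing; the firmware never needs to know what it’s controlling.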

The other relevant point would be inter-peripheral communication - basically, your mouse should be able to ‘talk’ to your keyboard and vice versa. This becomes a bit redundant if, as mentioned above, all the mapping and meaning of buttons is defined in software, but the point is that a key combo could change your mouse’s DPI, or holding a button on your mouse could put your keyboard into a one-handed mode.

#### Pen Tablets #

Honestly, most pen tablets are reasonably good. Not all of them have great latency or map super well between the pen’s nib and the actual pointer on the screen, and the majority don’t support touch, which probably isn’t ideal, but in general they do the job well enough - or would, if you only counted the hardware.

Warning, the following is a rant about drivers on Windows: it’s interesting, because the hardware is actually reasonably competent, while the software is largely so incredibly god-awful that it is somehow impressive. While I realize this is totally anecdotal, one pen tablet I’ve used on Windows has driver software so bad that it randomly takes over as the focused application about once every 15 minutes, meaning whatever line you were in the middle of drawing just stops, and you have to click on the program again to keep going. But, like, even with more competent driver stacks there are like six different options for pen pressure: Wintab, Windows Ink, the option to ‘use the device as a mouse pointer’… It’s incredibly confusing, and the required options vary wildly per application. To make matters worse, most of the drivers completely shit themselves if one display is scaled for HiDPI and another isn’t. On Linux, ironically, I’ve actually had very good luck with pen tablets.

But even then, most don’t handle pressure in a way that’s customizable in a good way, instead relying on software to do pressure mapping, which just isn’t great. It’s often a serious pain in the ass to get it set 𝘫𝘶𝘴𝘵 𝘳𝘪𝘨𝘩𝘵 so that you don’t get crazy pressure jumps, and even if the driver has in-driver calibration, you still usually have to tweak it more in the specific drawing application, so now you have TWO pressure maps and it begins to feel like trying to balance a double pendulum.
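For the curious, a pressure map is really just a curve from raw sensor values to a 0–1 pressure level. A minimal sketch - the 13-bit max here is typical of some tablets but varies by device, and real drivers often use spline curves rather than a single gamma exponent:

```python
def map_pressure(raw, gamma=0.6, raw_max=8191):
    """Map a raw pen pressure reading (0..raw_max) to 0.0..1.0 via a gamma
    curve. gamma < 1 makes light touches register more strongly; gamma > 1
    requires a firmer press before pressure ramps up."""
    raw = max(0, min(raw, raw_max))  # clamp out-of-range sensor values
    return (raw / raw_max) ** gamma
```

The double-pendulum feeling comes from composing two of these: the driver applies one curve, then the art program applies its own on top of the result.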

I do still think there’s more room for improvement in the hardware too. I think Microsoft’s Surface Studio 2 actually had some really interesting and innovative ideas, albeit a weee bit on the extremely expensive side at 3,500 USD. I also think the HP Sprout did some really neat things. Unfortunately, I highly suspect both will suffer from poor long-term support.

I also think there’s room to allow for workflows that rely more on the physical, possibly something like Dynamic Land or even just the iskn Slate or Repaper. (Note: I tried the Slate and it really, really sucked. But the idea was interesting.)

#### Audio And Video #

Your webcam is shit. You know how I know? Because it’s a webcam. Even everybody’s go-to, the Logitech C920, is shit(8). You know what else is shit? Your microphone. When you type on a laptop it sounds like a damn earthquake, and if you get comfortable and lean back in your chair you suddenly go quiet. But more than anything, it sounds like I’m carrying a call over cans on a string. Maybe you have a gamer headset. Cool, the boom mic sounds about as good as somebody screaming through a cardboard tube.

But better cameras do exist, and if you’re willing to shell out the cash you can get a decent mic. Either way, though, they share a problem.

The default settings are still shit.

On webcams you have auto white balance, auto focus, auto gain, etc., and everything they try to do is awful. On a laptop that will legitimately be in a different setting regularly, that’s somewhat forgivable, but on a desktop webcam? Like, the only variable here is whether there’s a window letting light in. Make calibration easy and give me a white balance slider. Please. Or just make better cameras and let the open source community make better drivers. I assure you there are plenty of rage-filled nerds willing to make your product not suck so hard.
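And it’s not like decent white balance is rocket science. The classic gray-world method just assumes the scene averages out to gray and computes per-channel gains from the channel means - a sketch of that idea, normalized to green as many camera pipelines do:

```python
def gray_world_gains(avg_r, avg_g, avg_b):
    """Gray-world white balance: compute per-channel gains that equalize the
    mean of each color channel, anchored to green. Multiplying each pixel's
    channels by these gains neutralizes the color cast of the lighting."""
    return (avg_g / avg_r, 1.0, avg_g / avg_b)
```

A driver exposing even this, with a slider to nudge the result, would beat the flickery auto modes most webcams ship with.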

#### Authentication and Authorization Suck #

Authentication is the sign-in - verifying you are who you say you are.

Authorization is what the user and service can do, or what ‘permissions’ you grant the service and what you’re allowed to do on the service.

##### Authentication: #

I’ll bet cold, hard fictional cash that you’ve put off turning on two factor for a service you don’t give two shits about.

I’ll double my fictional money to bet that you have a junk password you use by default everywhere you don’t give a shit.

Hell, I’ll go all in betting on saying you’ve authenticated with quite a few services by just using the

button, though possibly only because it was the only way to login to that service at all.

Unfortunately, all of the above are probably not the best decisions.

The top two points combined mean someone can find your password in previous security breaches (haveibeenpwned) and then just log in, easy as that.
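Incidentally, the way haveibeenpwned lets you check a password without revealing it is kind of neat: its range API uses k-anonymity, so you only ever send the first five hex characters of the password’s SHA-1 and compare the returned hash suffixes locally. A sketch of the client-side half (the actual HTTP request is left out):

```python
import hashlib

def hibp_prefix_and_suffix(password):
    """Split the SHA-1 of a password into the 5-char prefix that gets sent
    to the range API and the 35-char suffix that is compared locally
    against the list of breached-hash suffixes the API returns."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]
```

Your junk default password is almost certainly in that list.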

And look, I’m guilty of all of the above too. Sure, a password manager helps with this, but that’s still annoying, as sometimes you just need to quickly sign in on a device for a few minutes. As of right now there are just no good solutions that are secure, easy to use everywhere, and fine-grained enough to let users give exactly the permissions they want and no more.

Yeah. No. Well sorta. Okay, let me explain. Yes, you should use long passwords for exactly the reason this comic explains. But really, we need to stop using passwords outright. They just sorta suck.

Instead, we should go to Single Sign-On, like the above “Login with Social Media” example, but the user should be able to trust and change the identity provider with ease, to avoid the ‘Facebook banned me’ issue.

The real shitty part is that a lot of services already support something like this, letting you set up sign-in with Single Sign-On via your own identity server, but it’s usually limited to enterprise users.

That said, you can self-host Single Sign-On (the way ‘Sign in With Google’ works); Keycloak, Dex, and Gluu are all options for doing this. It’s just that basically no online services will let you use your self-hosted service without an enterprise account.

So, for now, users get fucked. There’s really not any good options.

That said, some things have gotten better. WebAuthn provides a much better way to authenticate with many services, including some that can provide Single Sign-On identities, often via those little USB keys, which are hugely better than the typical username-and-password paradigm.

Note: those USB security keys have gone through a standards update. The newer ones, which support WebAuthn, are a bit more expensive and less common. The standards are confusing to understand, but the TLDR is that you want something that supports WebAuthn/FIDO2. The older standard, U2F, is a two-factor-only system, while FIDO2/WebAuthn keys can outright replace passwords on services that support it.

If you really want advice on what to use for your personal, daily password storage needs, I think KeePassXC is probably the best option at the moment, though it’s still a tad awkward.

For advice on security and privacy see the Security & Exploitation and Privacy pages.

##### Authorization: #

Put simply, we need easier-to-use, fine-grained authorization settings that can’t be bypassed. I should be able to tell a program, website, or app that I don’t want to give it my location, and then, if it asks anyway, it should be fed garbage. Refusal should also not block access to the service(16). The same goes for storage, microphone, contacts, etc.
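A sketch of what “feed it garbage” could look like - the app names and permission store here are obviously made up:

```python
import random

# Hypothetical per-app permission grants.
GRANTED = {"weather_app": {"location"}}

def get_location(app, real_location):
    """Return the real location only if the app holds the permission.
    Otherwise return plausible random coordinates instead of an error,
    so the service can't detect (and punish) the refusal."""
    if "location" in GRANTED.get(app, set()):
        return real_location
    return (random.uniform(-90, 90), random.uniform(-180, 180))
```

The key design point is that denial is indistinguishable from consent on the service’s side.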

Newer versions of Android actually do this really well, including the ability to only grant those permissions for that session. This is amazing. It’s not perfect - far from it. Like, Bluetooth requires location permissions, and, unless you’re on a rooted phone, there are some permissions the user can’t even give. That’s a load of shit, but I’ll come back to that.

#### Content Linkage sorta sucks #

Screenshot from the homepage of Obsidian.md

The digital world doesn’t have to be lonely pages indexed like a book, so why are we treating it as such? Today each page can point to any other page in a beautiful web of interconnected information, where each topic has lines of association spanning out such that no two pages are unconnected. Wikipedia sort of has the simplest form of this, but what if we had systems so capable of automatic understanding - not just tagging - of information that any new info could propagate through that web naturally? Social linkage of people into the same graph, even if anonymous, could help connect people who, together, due to their very specific knowledge, drive mankind further. I should clarify: I literally mean a web/graph, possibly in 3D, relating and indexing information, possibly like these 3D views of Wikipedia (though the data should probably be served ‘raw’ so that other presentation methods can be developed, as this definitely wouldn’t be ideal for actually reading the content):

WikiGalaxy: Explore Wikipedia in 3D (wikiverse.io is very similar and worth checking out too)

Obviously this overly linked system is something that people would need to get used to. Until the advent of the WWW we read information linearly, page by page. The web has allowed a tree-traversal style of navigation, so that any missed topic can be reviewed, but generally this is a system where the tree only builds down, to simpler information, from the current node. It seems weird to think about an algebra book that suddenly references multidimensional calculus, but this is exactly what I’m implying. In my education there were uncountable times I had to learn something because ‘it will be used later’ with no explanation as to how or why. Linking back up the tree allows for information traversal in both directions, eliminating this problem.
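The bidirectional-link half of this is simple enough to sketch; it’s essentially what Obsidian’s backlinks do. Given pages and their outgoing links, you invert the map so the graph can be traversed up as well as down:

```python
from collections import defaultdict

def build_link_graph(pages):
    """Given {page: [linked pages...]}, return both the forward links and
    the inverted 'backlinks' index, so every page knows what points AT it,
    not just what it points to."""
    backlinks = defaultdict(set)
    for page, links in pages.items():
        for target in links:
            backlinks[target].add(page)
    return {"links": pages, "backlinks": dict(backlinks)}
```

With backlinks, the algebra page can surface “calculus builds on this,” answering the ‘it will be used later’ question automatically.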

### Presentation of Information Sucks #

This section is about how we view information, the presentation of words on a page, graphs, and information in general. To start, I want to look at something a little bit different.

This is the home page for a project called Gemini, which is a sort of alternative internet, but, wait, hang on…

These are both that same page, loaded in different browsers. Why do they look different?

Well, Gemini does something you may think strange: it lets the browser (client) handle the look of the page. It only serves the raw text. That’s it. That’s all you get.

Now, I do not think this is a good idea for the general web. But, I do think as an idea, it can give us some valuable insights.

Put a pin in it for now though, let me jump ahead into Dark Patterns:

#### ‘Dark Patterns’ #

A dark pattern is “a user interface that has been carefully crafted to trick users into doing things, such as buying overpriced insurance with their purchase or signing up for recurring bills”

That’s the definition from Wikipedia, anyway. I think it’s a bit better put as “Dark Patterns are what happens when UI designers are a bag of dicks.”

This site has a lot of really good info on Dark Patterns, and I recommend heading over there and then coming back over here.

Oh, cool, you’re back.

The biggest dark pattern that drives me nuts is a mix of what that site calls ‘Confirm Shaming’ and ‘Misdirection’. I’m talking about sites that do this

where the design is actively pursuing an agenda. But it’s not just those. Even ones that look semi-reasonable can still be annoying if they don’t include the action directly. The affirmative action should be stated on the button that triggers it, and both actions should be given equal weight to the user.

for example

Note that here, by Action, I literally mean to include the verb. Delete. Replace. Print. Etc. Yes or No is not good enough.

Destructive or irreversible actions, such as deletion (not recycling), should be given a confirmation dialogue and, if particularly important, a dialogue that requires meaningful user input, like this prompt when deleting a repo on GitHub:
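The logic behind that GitHub-style prompt is trivial, which makes it all the more damning that so few apps bother - a sketch:

```python
def confirm_destructive(resource_name, typed):
    """Guard for irreversible actions: the user must type the exact name of
    the thing being destroyed, not just click a 'Yes' button. This forces a
    moment of deliberate attention before the action fires."""
    return typed.strip() == resource_name
```

Muscle memory can click “Yes”; it can’t type `my-important-repo` by accident.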

Alright, so back to Gemini: making more things like it wouldn’t totally solve this problem - designers could still totally choose to make the text confirm-shame - but by letting more elements be controlled by the user (or their browser), we could at least do a little better by ensuring that options to confirm or deny are given equal weight and colored appropriately.

It’s not like this system couldn’t still be abused, of course, and there will always be a balance between the user trusting the service and its UI and the beauty of that UI, but I think we could stand to go a bit in the direction of Gemini.

“Global warming relies on the theory that we are destroying ecosystems. There is no evidence that we could destroy ecosystems.”

Rush Limbaugh, recipient of the Medal Of Freedom.

Yes. Misinformation online is a royal fucking shit show. There’s no way to preserve total free speech - not that we should - and also have a system that doesn’t spread misinformation to the extent that people stop vaccinating their kids or start thinking COVID is a hoax. I’m not trying to address that problem. If I could, I would. But I honestly think that’s a genie we can’t really put back in the bottle now.

Instead, I think we could do some things to make it a bit harder to spread stupid, false information by making it a bit harder to present data in misleading ways. Sure, the data itself may still be bad, but, we can make an attempt to increase transparency and display data accurately. How? Well, first, go have a look at some comically bad graphs (Statistics How To).

A lot of these come down to graphs that purposefully play with axes or do other bullshit with the express intent of tricking you.

So, fuck their formatting. We should do it the Gemini way: let the client handle the data display, and make the graphs interactive. If it’s a two-bar chart with one bar at 54.5% and one at 55.0%, that’s what the user should see first, and only then can they zoom in.

This is already easy enough to do with something like https://d3js.org/, but it would need to be on the client side, and the server would just have to send the raw data plus a preferred way to render it (bar chart or whatever); otherwise the problem is still there. Over time, standards could grow to support more display formats. This would have the side effect of making it easier to author data and make web pages in the first place.
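To make the idea concrete, here’s a toy sketch of “raw data plus render hint,” where the client enforces honest axes no matter what the author would have preferred. All the names here are invented for illustration:

```python
# Hypothetical payload format: the server ships raw values and a render
# hint; the client owns presentation decisions.

def payload(values, labels, hint="bar"):
    return {"data": values, "labels": labels, "render_hint": hint}

def axis_range(p):
    """Client-side default: a bar chart's value axis always starts at zero,
    so a 54.5% vs 55.0% comparison can't be inflated by axis truncation."""
    top = max(p["data"])
    if p["render_hint"] == "bar":
        return (0, top)
    return (min(p["data"]), top)
```

The misleading zoomed-in view is still reachable, but only because the user asked for it.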

This also makes it easier to compare data sets, as now the client actually has access to the source data, or at least the data that drew the graph.

This practice could be incentivized too, as news, shopping, and review sites that use it could do so as a way to build trust with their users, and, probably more importantly for adoption, shit on their competitors that don’t do the same.

For those that still don’t, it might be possible to spin up a system with some machine learning to extract data from graphs in static images, and then re-display them as dynamic content.

This may not guarantee the data is good, but it at least makes progress toward data we can actually trust.

This could have extra uses too. Having something that could take two 2D graphs with a common axis and turn them into one 3D graph would be incredible, particularly if that data could come from multiple sources across multiple domains. Combine this with the ability to change the type of graph, and this could help expose otherwise non-obvious trends.
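Joining two series on a shared axis is trivial once the client has the raw data rather than a picture of a chart - a sketch:

```python
def merge_on_common_axis(series_a, series_b):
    """Join two (x -> y) series on their shared x values, yielding
    (x, y_a, y_b) triples that could feed a single combined 3D plot."""
    common = sorted(set(series_a) & set(series_b))
    return [(x, series_a[x], series_b[x]) for x in common]
```

With image-based charts this requires error-prone pixel scraping; with raw data it’s three lines.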

Beyond that, content moderation needs improvements too - I don’t even mean fake news or porn here (though we could stand to have better NSFW tagging on most social media). I mean the bullshit reviews on Amazon or the fake products when shopping online (mostly fake electronics). If those services are going to be allowed to make stupid amounts of money, they should be required to do at least a tiny bit of consumer protection.

Yes, I see the irony in a post this long.

Bobby Mikul, Times Square :CC0 – Source

Information overload is increasingly becoming a problem. As more and more information becomes accessible at our fingertips, and more advertisements get the opportunity to be beamed, via any one of a number of surrounding screens, directly into our retinas, we need a way to filter it all down to levels the human brain can cope with and digest.

This is a complicated subject. On one hand, it’s amazing to have an infinite wealth of information. On the other, an ever-growing amount of that information is shit and irrelevant, and it eats away at our very limited mental processing time for the day. We can only ingest, actively pay attention to, or throw out so much information, and when much of the information we seek to avoid (ads) is actively doing everything in its power to demand attention from our brains - be it with sex appeal, bright colors, or even humor - it’s an uphill battle. So what can we do? Well, a good start would be to legally limit advertising to be less distracting from normal content and make advertising more easily distinguishable from the real content. But I don’t think that’s enough. I think if we’re going to build systems that use machine learning to get better and better at sucking up our time, we need to put just as much effort into design that promotes health and consumption in moderation.

An example of this is Netflix’s ‘Are you still watching?’. While this was implemented on their end to prevent unnecessary data usage, it has the side effect of letting users know they’ve been on the couch longer than is probably advisable. I’m not advocating for interruptions at every corner, just affirmative action by the user before bombardment with data. I do think as much data as possible should be linked to or aggregated, but don’t force me to see more than what I request plus some surface-level information. For something like YouTube this might mean playing a playlist is fine, but don’t start playing another ‘related’ video when that list is over.

Beyond that, keep the design minimal but powerful. I think Markdown is a great example of this. Users aren’t as dumb as people seem to think; we can, and do, learn ways to make interaction with the things we use daily faster, so make the ‘speed limit’ as fast as it can be while lowering the need for menu diving and learning complex actions. Putting a frequently used option into a menu that needs to be clicked at all is much slower than assigning it a keyboard shortcut.

But, okay, back to information overload: the biggest problem is still that there’s just too much. I don’t really think there is a solution. Maybe Banning Outdoor Ads like Brazil’s Largest City Did would be a start. Maybe requiring that the Terms of Service for any service a user signs up for be a limited length and actually comprehensible would help. But I just don’t know. I think the world has simply progressed to a point where FOMO, the fear of missing out, has given way to missing out just being a fact of life, as 500 hours of content are uploaded to YouTube every minute.

What I do know is that trusting the YouTube or Facebook or Twitter algorithm to decide the content I see is incredibly dangerous, but that the alternative is overwhelming.

Meanwhile, legislation that has been passed to try to fix some of this often results in other issues, like all the ‘Can we give you cookies?’ prompts on websites: Why The Web Is Such A Mess (YouTube - Tom Scott)

#### Updates after Initial Draw #

Fuck your shit. Compute first, then display.

This may be the most irritating thing I encounter with modern computers. Our brains and bodies, as much as we may wish otherwise, don’t respond to stimuli right away. So when I search for something, go to click a result, and the screen updates mid-motion so that a different link or icon is now in the spot I meant to click, it’s really fucking annoying.

Windows’ built-in search, especially with web results on, is a huge offender here, but Google and other search engines do it too.

It’s not just search either, I’m sure everyone has encountered this in various places.

All you have to do is not change shit until you’re done computing the answer, and then change it only once. This is about as simple as it gets, and it avoids the magical rearranging menus that make everyone lose their shit.
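The rule above is easy to sketch: gather the complete answer first, then hand the UI one finished result set, so nothing under the cursor moves while the user is mid-click. A toy illustration (the app index here is made up):

```python
def search(query, index):
    """Compute the complete, final result list before anything is shown."""
    return sorted(name for name in index if query in name)

def render(results):
    """One atomic paint: the layout the user sees never shifts afterwards."""
    return "\n".join(f"[{i}] {name}" for i, name in enumerate(results))

# The wrong approach appends matches to the screen as they trickle in,
# re-arranging whatever is under the user's cursor; this one paints once.
apps = ["mail", "maps", "calculator", "camera"]
print(render(search("ma", apps)))
```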

#### Everything needs to be more damn responsive #

Fuck your 𝒻𝒶𝓃𝒸𝓎 animations. I love eye candy, and a little bit is fine, but I shouldn’t have to wait as a menu slowly drops down with some pretty animation. If I’ve used that menu before I probably already know where I want to click, and now because I expect to be able to do so instantly I just clicked whatever is behind it. Fuck that. If the animation is masking some load time, sure, but as soon as it is loaded, quit it, and show the damn content. If the animation is necessary to avoid suddenly flipping from black to white and blinding users, again, I get it. But it doesn’t need to take more than 100ms.

If I have to spend more than a fraction of a second to process that what is being shown to me is an ad, it should be fucking illegal. If you want to mix ads into the content, then they should be required to be a lot more visually obvious.

Original:

Edited:

Here the original at least has some color differentiation (using the Boost app to view Reddit), but on the official Twitter client I actually have to stop and look to avoid accidentally clicking an ad link. That’s some bullshit.

Y’know what else is bullshit? The fact that all of these ads are ‘personalized’ to the point that collecting crazy amounts of information on individuals is expected and almost inevitable online, even with a pile of tracker-blocking extensions and a DNS blackhole like Pi-hole. This could, and should, be a rant of its own. Being spied on by our own devices is 100% not okay, and it’s one of the biggest reasons the way we interact with computers sucks.

### Storing Information Sucks #

Storing your data blows. Users have to contend with backups, backups for your backups, bit rot, file size vs. compression, what file system to use, how to make backups actually convenient, mirroring information between systems with limited bandwidth, etc. But to start somewhere, let’s look at archival:

#### Archival #

Digital archival on ‘cold storage’ sucks. For one, that cold storage is often a PITA to attach in the first place, usually using either a slow USB interface, an expensive and far-from-universal Thunderbolt one, or, if you want to go very bulk storage, a specialized PCIe card meant for servers, which brings along its own pile of issues.

But even once you have everything attached, most of the time backups are a pain to run. You can always do the lazy copy-and-replace-existing method, but that’s painfully slow, as it has to check all the current files instead of just doing the logical thing and comparing two indexes. But, of course, most file systems don’t support this index-based method. Sure, there’s software to add it, like Bvckup, but most of what I can find is paid or not something I would trust.
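For what it’s worth, the index-comparison idea is simple enough to sketch in userspace. This is a hypothetical illustration of the approach, not how Bvckup or any real tool actually works:

```python
import os

def build_index(root):
    """Map each relative path to (size, mtime) -- a cheap stand-in for an
    index the filesystem itself could maintain."""
    idx = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            idx[os.path.relpath(full, root)] = (st.st_size, int(st.st_mtime))
    return idx

def diff_indexes(src, dst):
    """Compare two indexes and return what a backup run would need to do,
    without touching file contents at all."""
    to_copy = [p for p, meta in src.items() if dst.get(p) != meta]
    to_delete = [p for p in dst if p not in src]
    return to_copy, to_delete
```

The point is that once both sides keep an index, a backup run only compares two small dictionaries instead of re-reading every file.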

Using Git (or git-annex) is of course an option, but that has a higher barrier to entry than seems reasonable. At the same time, actual file versioning needs to be a thing, something better than having untitled.docx, untitled2.docx, untitled3final.docx, and untitled3.5.finaler.docx, even if it is still storing the file in full (though hopefully compressed) behind the scenes.

But, on the note of indexes, why are tools that provide a disk-offline index not better? From what I can find, catcli and Virtual Volumes View are the main two options, and both are a bit out of the way to use compared to just having it natively in the file browser.

#### Phone ↔ PC is the fucking worst. #

MTP needs to die a very painful death. USB Mass Storage devices, that is, devices that show up the same way a flash drive does, are infinitely easier to work with. On Android, with large folders, I’ve found adbfs, a tool that lets you do file transfer over the Android Debug Bridge, to be much better than MTP, but, really? No ‘normal’ user should be expected to use that. Hell, a lot of people just upload files to the cloud and then download them on the target device because it’s easier. There’s also a growing number of apps that let users do transfers over WiFi by hosting a Samba server on the phone, but why should something wireless be the better option? It’s absolutely crazy that things have gotten this bad.

#### We’re using ancient formats #

Look, JPEG and PNG are perfectly fine formats. For 2000. It’s 2020. HEIF (or BPG) really should be standard. Instead, it’s a motherfucker because M$ is too damn cheap to include the HEVC extensions HEIF relies on without either having the user pay $0.99 or claiming to be the OEM, because a collection of jackasses have it patented so hard, and require such licensing fees, that it may as well not exist. HEIF/HEIC or BPG I think have a good chance because of the preexisting hardware acceleration, but other formats like HiFiC, which uses GANs to do compression, look promising too.

As a note on why I wrote about HEIF/HEIC in particular: most phones are capable of storing images in this format now, and iPhones do by default, which is a real PITA if an Apple user emails these pictures to a Windows user.

Of course, the same applies to other formats. .flac is replacing .wav for high-end audio, but why not Direct Stream Digital (DSD)?

• All the best formats are a pain in the ass
• Format shifting sucks, opening them sucks, patents suck
• People use some really, really shit formats
• A lot of formats are needlessly complicated and not human- or computer-readable to anyone but the software vendor

#### Bit rot? #

Data on the internet gets compressed, saved, recompressed, resaved, upscaled, re-colored, and deep-fried pretty quickly.

This combined with more traditional bit rot, where errors result in flipped bits, is a massive PITA.

Sure, tools like waifu2x help with the first problem, but using AI upscaling to paper over lost data isn’t ideal. For actual bit rot, tools exist to detect bit errors in most formats, and you could always use a better file system that does checksumming, but both of these require more technical skill than most people have.
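Checksumming doesn’t strictly need a fancy file system, though: a manifest of hashes stored next to an archive is enough to detect (not repair) rot later. A minimal sketch using only the standard library, with invented function names:

```python
import hashlib
import os

def manifest(root):
    """SHA-256 every file under root; store the result alongside a backup."""
    out = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            with open(full, "rb") as f:
                out[os.path.relpath(full, root)] = hashlib.sha256(f.read()).hexdigest()
    return out

def verify(root, old):
    """Return the files whose current hash no longer matches the manifest."""
    new = manifest(root)
    return [path for path, digest in old.items() if new.get(path) != digest]
```

Run `manifest()` when the archive is written, then `verify()` on each later scrub; anything it returns has rotted (or been deleted) since.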

While not exactly related, data accumulation and near-duplication (think having two pictures where one just has 2px cropped off the top) is a big problem. Trying to sort through a mountain of images, text, or audio files can be nearly impossible if put off for too long, making good digital hygiene a must, despite the fact that nobody ever tells anybody how to have good digital hygiene in the first place.
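Near-duplicate detection is actually one of the more tractable parts of this: perceptual hashes like the ‘average hash’ reduce an image to 64 bits that barely change under small crops or recompression. A toy version, assuming the image has already been downscaled to an 8x8 grayscale grid (real tools do that resizing step for you):

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grid of grayscale values (0-255):
    each bit records whether that pixel is brighter than the mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, v in enumerate(flat) if v > mean)

def hamming(a, b):
    """Bits that differ between two hashes; small distance = near duplicate."""
    return bin(a ^ b).count("1")
```

Two files whose hashes differ by only a few of the 64 bits are almost certainly the same picture, so a library can be clustered without comparing every pixel.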

AI tools to tag and identify images and audio help, but those tools are still limited and often only work well on uncompressed data, so no .jpgs or .mp3s for you.

With all of this combined keeping your files in order, not corrupted, and not having duplicates becomes a growing issue.

#### Storage Hardware and File systems suck. #

The hardware issue is mostly a side effect of trying to market technical differences to people who ultimately just want a place to drop their files. A normal user shouldn’t have to know what all the various specs of an HDD or SSD mean to know what to buy.

That said, holy shit do manufacturers suck at this. Everything from Western Digital redefining ‘RPM’, to Western Digital using SMR on NAS drives, to DRAM-less SSDs and bait-and-switch in regards to SSD performance.

SMR, or Shingled Magnetic Recording, has a few issues that make it problematic for Network Attached Storage (NAS) systems using multiple disks, particularly if the NAS is running ZFS, a common file system made for exactly this use case.

But the issues go beyond that. While a bit controversial, I think literally any modern filesystem (BTRFS, ZFS, or even EXT4) is much better than the mess that is NTFS, yet Microsoft only officially supports NTFS, FAT(/32), and ReFS, all of which sorta suck.

There is no fucking reason everyone - Microsoft, Apple, Linux, etc. - can’t fucking agree on something and avoid the massive fustercluck that is using FAT32, a filesystem that can only store files up to 4GB, as the only common system that “just works”.

Note: you can use BTRFS on Windows using third-party tools. Technically, the same is true of EXT2/3/4 too, but I don’t trust it not to eat my data.

Ideally, we’d be using Logical Volume Management so that the entire filesystem can have snapshots, partitions can be resized, and multiple physical disks can be used.

I also don’t get how, in 2021, some systems are still booting off of spinning rust. Hell, why are we really using it at all? Yes, I know the price per GB is much lower, but we’re talking about something so sensitive to vibration that Shouting In The Datacenter is a problem. This is extra dumb when you consider that a lot of computers and game consoles will sit right next to speakers and subwoofers. Every time I pick up a laptop with an HDD and can feel the inertia of the disk, it makes me cringe.

#### Cloud Storage is a terrible idea #

To keep the core of this issue brief: The cloud is just someone else’s computer. You can never be certain of what they’ll do to your data.

You can’t be sure they won’t have some random DMCA complaint take something down.

You can’t be sure they won’t suddenly increase prices and essentially hold continued access to your data for ransom.

You can’t be sure they won’t mine your data for targeted advertising.

You can’t be sure your data won’t accidentally be public because of bad security.

Just don't put your data in the cloud.

Character owned by Vega, art by Talon Creations and Vega

That said, I will admit two valid uses:

1. Collaborative Editing. GSuite is actually pretty cool.
2. Backup, but only if the service is backup-only and you already have at least an on-site backup. For example, I think Backblaze is actually a pretty neat service, and it seems like they do things reasonably. The ‘ship you a hard drive’ option is what makes it make sense to me. Note: I’ve never actually used Backblaze.

But 1. still has issues, especially if the format the collaborative documents are saved in is only valid on that cloud platform. Think .docx for Word, but what does GSuite use? Can you be sure it’ll work offline?

I’d also like to mention the idea of distributed computation here, as it can be used for both the storage of and computation on data. I think that having a true distributed system in place, one where all users contribute compute and storage for its use, makes sense. The obvious ask is to get it to be self-sufficient. This brings up the idea of balancing usage against contribution; I think the easiest solution is to simply use a system of computational debt tied to each user account. If a user is creating more computational debt than the average debt the system can sustain, then that user should be handicapped in bandwidth accordingly. This does sort of bring us full circle to ‘can I just trade debt with someone, or sell them my computational time,’ such that we’re back to crypto-based services again, and I really don’t like this idea for two reasons:

1. This system needs real-time computation and bandwidth, and these vary in value, just like how electricity costs more during peak hours.
2. This incentivizes simply paying for compute time instead of actually contributing computational power to the network like it actually needs, which in turn creates an incentive for people to do this at scale annnndd look at that, we’re back to centralization.
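The debt-and-throttle scheme described above can be sketched in a few lines. Everything here is invented for illustration, the names and numbers included:

```python
class Account:
    """Toy model of the 'computational debt' idea: usage minus contribution."""
    def __init__(self):
        self.used = 0.0         # compute consumed, arbitrary units
        self.contributed = 0.0  # compute donated back to the network

    @property
    def debt(self):
        return self.used - self.contributed

def allowed_bandwidth(account, base_rate, sustainable_debt):
    """Scale a user's bandwidth down as their debt exceeds what the
    network can sustain on average."""
    if account.debt <= sustainable_debt:
        return base_rate
    return base_rate * sustainable_debt / account.debt
```

A user who contributes as much as they use keeps full bandwidth; one at twice the sustainable debt gets half. The tradeable-debt problem, of course, lives entirely outside this sketch.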

The biggest problem with this is that home Internet users very rarely have symmetric connections, so people would probably be very pissed off if their download speed were suddenly tied to their upload speed. This could be offset by building up credit, as previously mentioned, but that has the issues previously mentioned too. I suppose there could simply be a credit cap, but setting it would be exceedingly awkward, as a sensible number would vary by user and how they use the system.

I do hope that someone has a better idea than me for the future of distributed computation, because I really can’t see any good solutions despite wanting it to be part of the future.

I’d also be remiss if I didn’t mention BOINC, a tool you can use to donate unused computational resources from when your computer would otherwise be idle to good causes, such as searching for extraterrestrial life or folding proteins to look for cures to various diseases.

Unfortunately, in response to criticisms of cloud storage like this one, a lot of providers of “Personal Cloud” devices have cropped up. Though headlines like “‘I’m totally screwed.’ WD My Book Live users wake up to find their data deleted” and “If you have a QNAP NAS, stop what you’re doing right now and install the latest updates. Do it before Qlocker gets you” might go to show why that’s also a pretty fucking stupid idea.

### Transferring Information Sucks #

I mostly mean networking, but things like flash drives too.

#### The Internet Sucks #

Well, okay, the internet is honestly fucking awesome. But some of it is designed horribly and some of it is nowhere near as good as it could be because of users making stupid choices.

To start with, let’s look at how horribly shit was designed. For a primer, I recommend reading IPv6 Is a Total Nightmare — This is Why by Teknikal; it explains the issues with both IPv4 and IPv6 beautifully. There are other issues with the web, like the fact that neither DNS nor IP was designed to be encrypted (and so private) by default, so instead we’ve had to patch on fixes like HTTPS. And new network security problems are found regularly, such as NAT Slipstreaming, a nasty issue that made the rounds recently.

There are also issues of access. In the US at least, most places are part of an ISP regional monopoly. Often they’ll argue that you do have options, such as satellite internet. However, this is complete and total bullshit. You do technically have the option, sure, but it’s slow, usually has data caps, and just generally sucks. If you’re in a rural area, you’re lucky if the copper in the ground is still good enough to get you something fast enough to watch a YouTube video. Then, on top of all this BS, the ISPs regularly get caught doing shit to your traffic, whether it be injecting ads, terminating connections early, blocking services (often torrenting), not letting you forward ports, etc. Oh, and then they try to charge you for a modem you don’t even have; thankfully that was just made illegal.

TL;DR: ISPs are evil.

#### Centralization Sucks #

https://lbry.io/

https://datproject.org/

https://ipfs.io/

#### Transferring Your Profile Sucks #

AnIdiotOnTheNet’s comment on this Hacker News submission - ‘Re-Thinking the Desktop OS’

[…]

1. Switchable “user profiles” instead of “user accounts”, which are an artifact of giant shared computer systems. User profile just contains personalized settings and can be located anywhere, including removable media so you can take yours to other computers. If you want to keep things safe from others, encrypt them. Otherwise there are no permissions except those applied to applications themselves.

I think Solid, a project by Prof. Tim Berners-Lee, the guy behind the World Wide Web, is a decent implementation of this, if it were to gain enough traction to actually be used.

Solid’s central focus is to enable the discovery and sharing of information in a way that preserves privacy. A user stores personal data in “pods”, personal online data stores, hosted wherever the user desires. Applications that are authenticated by Solid are allowed to request data if the user has given the application permission.

But the point I’m trying to convey is that right now setting up a new device or logging into a service sorta blows. The user profile needs to be secure, user-owned, and decentralized. For those that know Linux, it’s what making your ~/.config folder into a git repo should be like.

#### Local Backups By Default #

Most web pages are reasonably small, especially if you ignore the JavaScript crap. Why do browsers not just back up every web page we visit (on desktops and laptops, where storage is a non-issue)? This would let you do a local text search of everything browsed recently, as well as provide backups in case a page goes offline or moves.

There are tools that do this already (like ArchiveBox) which can be automated, but they’re still not friendly to normies. There are also sites like the Wayback Machine and Perma.cc that will save copies of websites for you and provide a link that should always work, even if the website goes down or the address changes, but again, this is a bit of a pain. It can also lead to copyright issues for these services.
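The core of such a local archive is tiny; a back-of-the-napkin sketch in which every fetched page is stored under a timestamped, content-hashed name so history search becomes a local grep (all paths and function names here are invented):

```python
import hashlib
import pathlib
import time
import urllib.request

def store(url, html, root):
    """Write one snapshot of a page to disk under a timestamped,
    content-addressed filename; returns the saved path."""
    digest = hashlib.sha256(html).hexdigest()[:12]
    stamp = time.strftime("%Y%m%d-%H%M%S")
    safe = url.replace("://", "_").replace("/", "_")
    path = pathlib.Path(root).expanduser() / f"{stamp}-{digest}-{safe}.html"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(html)
    return path

def archive(url, root="~/.web-archive"):
    """Fetch a page and snapshot it. A browser would hook this into
    every navigation instead."""
    return store(url, urllib.request.urlopen(url).read(), root)
```

The content hash also means identical revisits cost nothing to deduplicate later.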

#### Physical Interaction #

I also think the boundaries of physical and digital should be more blurred. I’d love it if I could set a book on my desk and search through it for an idea or concept by mere image recognition of the cover, or, if it’s an unknown book, at least have it digest any pages shown to it explicitly. Say a section was highlighted? It would be great if that were automatically added to a personal journal file of sorts for future reference, especially if related data were automatically associated with online sources, or links made to people interested in similar subjects.

The Screenless Office (Screenl.es) and Dynamicland both show this idea pretty well.

### Creating New Information Sucks #

Or, People Will Only Make Stuff That Is As Good As The Tools They Have

Note the Will and not Can. A very talented musician can make a shovel sound good; a talented photographer doesn’t need a good camera. But in general, that just sets the required bar of talent, and therefore time, higher. The better and more efficient our tools are, the better the content people can make without putting in more time than they’re willing to.

A better camera won’t make you a photographer, but it might be the difference between unusable pictures and a saved memory.

I think I’ve generally made the case that our tools suck so far, but here’s where I think things can get really interesting.

[TODO]

• Faster input
• WYSIWYG sucks
• Needing to compile your views also sucks
• Tools need to scale in complexity with the user
• Start by showing an intro UI, let the user add more features to the UI as needed
• Great in application documentation
• Included examples
• on UI help and highlighting
• Program data type interoperability
• Common in-progress formats for video editors, image editing, sound editing, etc.
• Variety of formats, hard to shift between
• mp4 or .gif?
• Copyright is a real pain

As far as how all of these tools should work and interact: I think they should all use standardized file formats, even if they have to abuse them a little to do so, and they should all share a common Application Programming Interface (API) for interaction. This would hopefully mean that any extension written for one program would work for another, and any program could talk to another. Currently, the realm of music software has a little bit of this with VSTs and MIDI, but it still leaves a lot to be desired. I’d actually like to take it a step further and ask that all data of any kind use a common enough format that it can be processed by any extension or program written with this API in mind. Imagine if you could use a synthesizer as a static generator for image manipulation, or color management controls as an EQ. Both would and should behave in strange ways, and it’s this very lack of defined behavior that could lead to interesting art forms. I’d love to see a ‘Master’ API that works across all formats and ideas with a common data type that allows for program⟺program, program⟺extension, program⟺hardware, etc. communication, even in long, complicated chains. After all, if you’ve taken a signals and systems course, you know that basically any data can be treated as a signal.
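The ‘everything is a signal’ API above can be sketched as a chain of nodes that all expose the same one-method interface over a flat list of floats, so an audio effect works on pixel rows just as happily as on samples. The node names here are made up:

```python
class Gain:
    """An 'audio' effect: multiply every value in the signal."""
    def __init__(self, factor):
        self.factor = factor
    def process(self, signal):
        return [v * self.factor for v in signal]

class Clip:
    """A 'color management' style effect: clamp values into a range."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def process(self, signal):
        return [min(max(v, self.lo), self.hi) for v in signal]

def chain(nodes, signal):
    """Run the signal through each node in order; nothing here cares
    whether the floats were samples, pixels, or anything else."""
    for node in nodes:
        signal = node.process(signal)
    return signal

# The same chain applies whether `signal` is audio or a row of pixel values:
print(chain([Gain(2.0), Clip(0.0, 1.0)], [0.2, 0.6, 0.9]))
```

Undefined-but-consistent behavior like this is exactly what makes the cross-domain misuse interesting.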

This could be done with some sort of node based programming system. While I don’t actually know how to use it, I think Luna demonstrates this concept fairly well:

though there are plenty of other examples, like the node editors used for making shaders or programming in Unreal Engine

### Software that breaks the mold #

[TODO]

MasterPlan by SolarLune

Habitica?

https://www.craft.do

http://audulus.com

https://dag.s-ol.nu

Demos of the WIP Blockhead DAW:

### Hardware for Open Experimentation #

[TODO]

Microfluid computers, diode logic, GPIO

#### Wasting time on stupid shit that nobody cares about #

Microsoft has been spending a lot of time moving to the new UI, and their calculator has had its UI updated like a dozen times now, yet SpeedCrunch remains 1000x more usable and tools like WolframAlpha remain superior. Stop spending time on shit literally nobody gives a shit about.

## Chapter 4: What points contradict? #

### Having Low Level Access and High Level Usability. #

Yeah, this is always a problem. It’s always been the dream to be able to describe in ‘natural language’ what you want and have the computer parse it, infer intent, and do it for you, but naturally, this will never be totally possible. This point is only conflicting in the sense that it can be overwhelming: if a user can work in something as high-level as natural language and can manipulate those instructions all the way down to the assembly level, that’s a lot of open space. Ideally, each layer of the abstraction would be open to tinkering and modification for the sake of getting the solution to work correctly, to pipe data around at any level, or to add functionality in its most natural language: some tasks are better suited to describing what’s needed in English, some are easier to do down in the dirt.

Making everything open this way may sound complicated, but if the UI were presented right, it could just be a stack of abstractions that propagates up and down. Changing the assembly could change the source could change the natural-language description. Better, the cost of this could be lowered if each layer were only shown and editable on request, and that layer just bypassed until needed. Of course, this would mean making a set of languages that can go from higher level to lower level, yet have a middle language introduced mid-stack without changing the meaning. This is complicated. It’s like asking for a fast Python interpreter that can be run directly or spit out C, then having that C code be editable with its changes reflected back into the Python code. I’m aware of how complicated a problem that is. Add a natural-language description above the Python level and it just got much, much more complicated. Still, I think this is something we should aim for.

### Latency/Speed vs Things That Are Inherently Heavy #

I’m asking for a lot of inclusion of AI/ML tech into the OS and day-to-day use, yet also asking for much, much faster response times in general. To some extent, hardware with dedicated silicon for AI/ML will make this better, but regardless, there’s no way around how much this conflicts. I think the only way to fix this is to recognize what latency is and isn’t acceptable.

As mentioned above in Presentation of Information -> Updates after Initial Draw, there are some things that are particularly egregious to the user from a UI timing perspective. Waiting on the computer sucks, sure, but having to babysit the computer while you wait on a prompt that could easily be given preemptively or make sure a task doesn’t time out is completely unacceptable.

But even just looking directly at speed and latency, there’s still a ton of room for improvement. Why does the root file system not retain at least an index of other file systems to let you browse while an HDD spins up or a network connection is established? Why do so many damn things have s u c h l o n g animations that have to complete before the user can continue? But most of all, can we please stop building programs with Electron and other things that are just full browsers for one program? Use literally anything else. The best way to lower latency is to use as little code as possible, good data structures, good libraries, and good tools. I said keeping latency down contradicts doing things that are heavy, but a lot can be done to make much of what we use day to day substantially lighter to begin with, with no loss in functionality. I understand why projects use Electron, but if you must, please just use Flutter or Neutralino or Sciter or Ultralight or even a game engine. Just not something so heavy unless you need it? Please?

https://danluu.com/input-lag/

I’m writing this at @292.78 on Day 15 of 12,020. I’m typing on a Dvorak, split, ortholinear keyboard in a markdown document using Arch Linux instead of M$ Word on Windows. And it’s fucking awesome. Thing is, nobody else can use my computer. Moreover, if anyone were forced to learn all these weird formats and behaviors instead of what they’re used to, they’d give up. What people are used to, that is, the defaults, makes a huge difference. Defaults have a lot of power. There’s a good reason that (for a while at least) Microsoft had to inform users about browser options. What comes ready to use and presented to the user from the start is much more likely to get used than something a user has to go out of their way to get.

Similarly, the ability to even make choices in the first place matters a great deal. For example, on modern versions of Windows, you’re pretty much stuck with the stock shell (desktop + file manager), as alternatives are either pretty similar to what’s there already or mostly dead. The question then becomes: which choices matter? I think ideally everything should be open enough to be replaced, but that doesn’t fix anything if options aren’t given. At the same time, systems like Arch Linux will likely never have mainstream appeal exactly because none of these choices are made for the user. At the end of the day, most users want a system that just works. They don’t want to choose between a list of 5 different firewall providers, hundreds of desktop environments, and login managers, and shells, and so on. So, defaults have to be chosen. Fortunately, so long as people have the option to change things if they want to, they can approach a system that works well for them. For me, that’s Linux, i3wm, Dvorak, and a bunch of weird hardware. For a lot of people, it’s probably just exactly how Windows is now, with a few minor tweaks.

### Everything is in the browser now anyway? #

Above I said that all of this should be in the OS and not just browser extensions, as people still use a lot of non-browser tools, and I think saying that was rather dismissive. It is definitely true that when using a computer today, the vast majority of your use will probably be in-browser. The problem lies in that ‘vast majority’ part: that’s likely because, in general, people spend a lot more time consuming than creating, and the browser is built for media consumption. On the other hand, most creative software - be it for writing, art, video, music, etc. - is not browser-based because, well, the browser sucks for that. Maybe that will change as WebAssembly makes things faster, but I don’t see it happening, so I think we still need full OS integration for it to really matter. The other point to make here is that the browser probbbbabbllyy isn’t the best place to implement a lot of what I’ve mentioned so far, given a lot of it is performance-sensitive, works with private information, and relies on a deeper tie-in to the OS - something which, for security’s sake, the browser shouldn’t really be capable of. On the other hand, one of the things I mentioned repeatedly was portability. Browsers actually have this working pretty damn well at this point, syncing beautifully between devices compared to how things are on Windows or Linux (I wouldn’t know about Mac ¯\_(ツ)_/¯).

### Unification vs Diversity #

Or, why “I wish everybody used Linux!” is probably not the wisest thing to say. To keep this short: if everybody used Linux, there’d be less incentive for Linux to compete with Windows. If everybody used Windows, M$ wouldn’t have incentive to further their OS. Boiled down, competition is a good thing.

That said, there are limits to how much diversity (in the context of computing) is a good thing too. If I search the Arch User Repository for “i3lock” there are 28 results, plus the original that 13 of them are forked from. And, okay, not all 28 of those are actual forks, but you get the point. There’s a lot of work being duplicated across open source projects, instead of everybody working together to make one really good thing.

And, yeah, this provides more choices, but does anybody want to research 28 choices for anything to figure out which is best? Especially when most of them are super similar? With desktop environments on Linux, at least, each is typically novel enough to be fun to look at, but for something boring, like the given screen-locker example? Really?

## Chapter 5: What Might Radically Change Things? #

### Body Modification and Bio-Engineering #

Another point is the idea of biohacking and body augmentation. The most common biohacks include implanted RFID tags (which I actually have) and magnets for sensing electromagnetic fields, but these are still pretty mundane. This TED Talk, I think, shows what might be possible a bit better:

But I still think there’s room for a lot more. Last semester I was fortunate enough to take a class with Dr. Massimiliano Pierobon, who is currently the director of the Molecular and Biochemical Telecommunications Lab (MBITE) at UNL, and while I’m far from knowledgeable enough to understand everything they do there, I know they’re doing some very interesting work that could be summarized as hacking the chemistry and existing networks in biological systems (including humans). Here’s some work from the MBITE lab I found particularly interesting:

Bi, D., Deng, Y., Pierobon, M., and Nallanathan, A. “Chemical Reactions-based Microfluidic Transmitter and Receiver Design for Molecular Communication," IEEE Transactions on Communications (Early Access), 10.1109/TCOMM.2020.2993633, May 2020. [PDF]

Marcone, A., Pierobon, M., and Magarini, M. “Parity-Check Coding Based on Genetic Circuits for Engineered Molecular Communication Between Biological Cells," IEEE Transactions on Communications, vol. 66, no. 12, pp. 6221-6236, December 2018. [PDF]

Hanisch, N., Pierobon, M. “Digital Modulation and Achievable Information Rates of Thru-body Haptic Communications,” In Proceedings of the SPIE International Conference on Defense + Security (DCS), April 2017. [PDF]

These articles, and others from the MBITE lab at UNL, can be found here

Seeing this makes me wonder if the future of bio-hacking might be a bit more tightly integrated into how our bodies already work, rather than just tossing some electronics inside some silicone or glass to implant somewhere.

### Brain Computer Interfaces #

I don't know that BCIs are really the future. Elon Musk is working on Neuralink, which is neat and all, but I'm not sure I'm convinced. I would welcome a faster computer-to-brain link, as the keyboard->eyes->brain loop is far too slow, but I don't really see anyone going in to have their skull drilled into as elective surgery unless it's to correct or treat something else neurological. There have been efforts to do BCI without implants, but I suspect those would suffer from a lack of bandwidth.

I do very much hope to be wrong, as I think BCI has the opportunity to be the biggest leap humanity has ever taken. Soon, it may even be necessary just to deal with how quickly our world is changing and the amount of information we need to process every day.

Brain Computer Interface article on Wikipedia

## Wrapping up #

In all honesty, I'm not exactly sure what everything I just wrote is about. Mostly it's just a lot of ranting, but hopefully it has been interesting. To round things off with a bit of a closing note, though: I don't actually foresee many of the things I mentioned becoming commonplace, or many even being possible, if only because they'd require so many people to agree on standards. But there is one glimmer of hope, one proof that a uniform interface can work. The terminal. Yes. This terminal:

The terminal emulator above is still compatible with the VT220 from 1983 (as are most terminal emulators), yet from it, with a good shell (like Zsh), I can do everything I can really think of: browse the web, chat with friends, listen to music, basically anything. I'm not saying we should all stop using Chrome, but I think part of the reason so many neckbeards and sysadmins still use the terminal is that you can do so much with it: everything uses it as a common interface, and it's programmable. You can automate or string together just about anything, exactly as I described above.
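As one small illustration of that "string anything together" property (these are just standard POSIX tools, not anything specific to this post), unrelated programs compose freely because they all speak plain text over the same interface:

```shell
#!/bin/sh
# Find the three most common words in a stream of text by chaining
# five tools that know nothing about each other.
printf 'the cat sat on the mat the end\n' |
  tr ' ' '\n' |   # one word per line
  sort |          # group duplicate words together
  uniq -c |       # count each word
  sort -rn |      # most frequent first
  head -n 3       # keep the top three
```

Swap the `printf` for `curl`, a log file, or the output of any other program and the rest of the pipeline works unchanged; that composability is the whole point.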

Finally, I'd like to say I understand we don't all get the choice, be it due to monetary, physical, or other restrictions, to have a 'perfect' work environment. If you live in the city there will be noise; if you live in the country, you may be limited by your internet connection. I get that. Obviously I don't expect everyone to go out and make their own versions of some of the high-tech, borderline art installations that I linked either. I also don't think everyone's down to go get an RFID tag in their hand. I just wanted to present what I see as 'the future'. It probably won't come in 2021 or even 20021. I do, however, hope this has inspired you to look at the way you work, the environment you work in, and how you can improve it.

### Other hardware and software pushing things forward #

[TODO]

https://hookproductivity.com – Link all the things

Atlas Informatics (TechCrunch Article) - search all the things

https://apse.io – a photographic memory of all the text that goes across your screen

https://desktopneo.com – a UI mockup for a better system

### Other people that have ranted about similar things, but usually a bit more politely #

A Proposal for a Flexible, Composable, Libre Desktop Environment (Michael McThrow)

What do I care the open web is dying? (Abhinav Sharma, Cofounder Insight. ex Mozilla & Facebook)

I hate computers: confessions of a sysadmin (TechCrunch)

If you have a link to add, feel free to tweet at me @Vega_DW

If you would like to support my development of OpGuides, please consider supporting me on Patreon or dropping me some spare change on Venmo @vegadeftwing - every little bit helps ❤️