- Change. (permalink)
In my experience, changing groups of people works the same way as personal change.
It happens in only two ways. The first is the acute one: something big happens, so big that it is life-threatening and elicits a last-minute response. That's not generally desirable.
The second is bit by bit, and it can be frustrating that we can't create lasting revolutions any other way, but it is human. It works like this:
1) Pick the smallest unit of improvement. Or more precisely - one that you are sure that you can achieve and sustain with regularity. When in doubt, go smaller.
2) Make it as brain-dead simple as possible. Remove friction. Don't overthink it.
3) Track progress. Again, simplicity is the key, just make sure you are doing it. The primary reason to track progress is accountability and...
4) ...Reward success (and vice-versa, disincentivize non-adherence).
5) Have increments as goals. Do not just track the progress, set the desired slope as an objective.
This habit-forming stuff is, from the little I know, a pretty established thing. There is a "microhabits" book that I have not read, but I bet it teaches a similar lesson in many more words. What might or might not be interesting is the fact that these kinds of things, which are usually framed around self-improvement, work in groups/teams/societies too.
Sat, 13 Jul 2024 14:23:16 -0700
- How should motionblur blur? (permalink)
Today I was watching a video of someone doing motionblur via JFA for velocity dilation (https://www.youtube.com/watch?v=m_KvYlYF3sA), and it made me think. In computer graphics you often get this sense, when you start thinking about something to any degree of depth, that you didn't really get it - and no, it's not just you, we are still broken in so many ways that it is easy to land somewhere you'd think had been fully solved...
but at a closer look it's all wrong.
What disturbed me about motionblur this time? In the video, the author makes some claims about the right "direction" of the blur - i.e. whether it should be "centered" around an object or not.
At first I thought: obviously it should be "centered", you are capturing the object's motion from one time to another (the exposure / shutter speed), there is no "preference" for where a "trail" should be.
Then you start thinking, well, perhaps there is something to be said about the chain of approximations we have... We typically take velocities by backward differencing: the position at the current frame minus the position at the previous frame, as of course we can easily remember the last position data and it's easy to compute velocities this way in an engine.
But then we use this data to "integrate" along a line that is "centered" at the current frame position. This is clearly not right, is it? While I doubt it's a major source of error, it's something to think about.
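To make the mismatch concrete, here is a tiny Python sketch (illustrative names, not engine code) of the two pieces: the velocity estimated by backward differencing, and the blur samples taken either trailing behind the current position (the interval the difference actually covers) or centered around it (what we typically render):
import numpy as np
def backward_velocity(pos_curr, pos_prev, dt):
    # velocity as typically estimated in an engine: current position minus previous one
    return (pos_curr - pos_prev) / dt
def blur_sample_positions(pos_curr, velocity, shutter, n_samples, centered):
    # points along the motion vector at which color would be fetched
    # centered=True  -> span [-shutter/2, +shutter/2] around the current position
    # centered=False -> span [-shutter, 0], a trail behind the current position,
    #                   i.e. the interval the backward difference actually measured
    if centered:
        ts = np.linspace(-0.5 * shutter, 0.5 * shutter, n_samples)
    else:
        ts = np.linspace(-shutter, 0.0, n_samples)
    return pos_curr + velocity * ts[:, None]
prev, curr = np.array([0.0, 0.0]), np.array([1.0, 0.0])  # object moving 1 unit/frame along x
v = backward_velocity(curr, prev, dt=1.0)
print(blur_sample_positions(curr, v, shutter=1.0, n_samples=5, centered=True))   # x spans 0.5..1.5
print(blur_sample_positions(curr, v, shutter=1.0, n_samples=5, centered=False))  # x spans 0.0..1.0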
But then you think... why can't I then match the blur with the velocity "correctly"? Well, you can't, because you are banking on the fact that the color at the current frame is, in a way, the most correct - but why? See, when you look at photographic motionblur, objects always appear "centered": consider an object moving at a constant linear speed, and consider how it's seen throughout an exposure. The object overlaps with itself as it's moving, but the center of the movement (where the object is at half-time during the exposure) is where you get the most overlap, as the object does not keep recording a "streak" as it's "leaving" the exposure.
This makes sense, and if you consider how this "streak" extends when you look at the frames before and after any given one, it should be clear this "works". In more mathematical terms, we are dealing with good old aliasing, as this is a sampling problem (temporal sampling, but still sampling: we are discretizing a continuous signal into frames), and we are implicitly using a box filter.
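If you want to convince yourself, you can brute-force that box filter: accumulate a 1D "object" at many sub-frame positions over one exposure and look at where the energy piles up. A throwaway Python sketch with arbitrary numbers:
import numpy as np
width, travel, n_pix, n_t = 10, 40, 100, 200  # object size, motion per exposure, image size, time samples
accum = np.zeros(n_pix)
for t in np.linspace(0.0, 1.0, n_t):  # equal weight per time sample = box filter in time
    start = int(round(t * travel))
    accum[start:start + width] += 1.0 / n_t
centroid = (accum * np.arange(n_pix)).sum() / accum.sum()
print("accumulated energy is centered around pixel", round(centroid, 1))      # ~24.5, the mid-exposure position
print("object center at the end of the exposure:", travel + (width - 1) / 2)  # 44.5
The streak ends up centered on where the object sits halfway through the exposure, not on where it is at the last instant of the frame.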
But why? In real cameras, of course, you don't have math and a simulation: you have to work within the confines of whatever technology you have to make a shutter and record onto a medium. But in digital-land we could do anything.
So the question should be not how to simulate (an idealized model of) a camera, but how to do something that works best with human perception. And who knows what that is? I don't have the time to sift through Google Scholar to see if someone investigated the question, but I wouldn't be surprised if nobody has.
Maybe I should turn this into a proper blog post, with images and all. It could also talk about the other things we already know we aren't doing great with motionblur, and what realistically could still be tried. One big thing? Why are we limited to a single velocity? I always thought we should at least get some basis approximation of the velocity distribution that affects a given pixel. I think Jorge might even have tried this when we were doing the "nextgen posteffects" on COD, but I don't think anything shipped - I might misremember.
Mon, 8 Jul 2024 16:34:21 -0700
- Ghost in the shell. (permalink)
I started using computers very early in my life. When I was single-digit years old I was taught BASIC on a C64 and was writing tiny programs that displayed graphics or tried to be simple games.
I got one of the first internet contracts when a provider opened in my city (I seem to remember contracts were numbered incrementally and mine was something less than a hundred). Thus, I lived a lot of my life online, on IRC first, then ICQ, MSN... does anybody remember CU-SeeMe?
I'd reckon that if I tried to find and scrape everything that computers have on me, in various forms, it would be a ton of information, an involuntary (albeit, not regretted) lifelog. And of course, today we can easily record everything about us, the problem if any / for some, is how to avoid that, not the opposite.
All these are data points from which to interpolate a simulacrum. The deep machine learning models we see today capture the manifold of human experience. We are not that different, person to person, not in the grand scheme of things: feed a model some data to make it "understand" your uniqueness and I'd bet it would do a pretty good job at being you.
Not quite the singularity we wanted...
I don't have particular feelings about it. On one hand, this is the monkey's paw version of immortality, a Chinese room, certainly not alive, not conscious and not "you" - but one that could one day be good enough to fool the onlookers.
On the other hand, as for deep learning in general, I find it fascinating that we have the ability today to compress information and retrieve it in such ways that we can carry and conveniently query humanity's output in our pockets, be it in images, text, audio, or in this case, an individual's history.
And it would be undeniably fascinating if we could interact, say, with notable persons' stories in a direct way instead of through biographies.
One way or another though, I think this will remain science fiction for a while longer, if not forever. Not because we couldn't accomplish it, I believe we can, but because it all sounds too creepy for a company to be created around the idea. This seems the kind of thing that would appeal mostly to the Moron Musks of the world, not to most of us.
Fri, 5 Jul 2024 14:12:22 -0700
- Intentionality (permalink)
Sometimes people ask me what I learned as a manager, as a tech director and so on. What is my philosophy when it comes to steering projects and people as opposed to making code and math.
I often say that I have few universal rules, that I dislike any simplistic theory of management that devolves into a list of things and techniques to use.
Any solution needs to be embedded in the specifics of a problem, otherwise you're probably dealing with something so trivial it's not even worth theorizing about.
Context often can be distilled down to a list of priorities, things you care about, things you are willing to trade-off, things that are the opposite of what you care about. If you have these, you can align your solutions - metrics, incentives, disincentives, whatever you are producing.
My pet peeve is when people want "everything" to be "the best" - the inability to narrow down what matters and what does not - or when these priorities are only spoken, not embodied concretely.
All of the above is something I've known for a while now, that I've elaborated on and can speak to in a variety of different settings...
For example, I believe this actually works even down to code. I maintain that the secret of the mythical "10x" programmers (not "cowboy coders", but "real" 10x) is in the ability to tackle the right problems, not in their typing speed (sorry, vim/dvorak/ergo keyboard users).
What I realized now is that perhaps "intentionality" could be the best word, if a single one needed to be chosen, for all of this.
p.s. This can be applied recursively: once you know what you want, you can subdivide each item and further specify. That's not very interesting. The most important thing is to apply the principle to itself, i.e. to know when to stop.
A cardinal sin of management, and one that often comes from the kind of "universal principles" (beliefs really) that I despise, is to manage for management's sake. If you are doing something, if you are reaching out for a tool, you have to know why, what's your hope. And you should err on the side of being conservative - a "do no harm" approach.
Lastly, if intentionality in management and bureaucracy is important everywhere, it is most important when hiring. Not just in choosing the attributes of a good hire, but in knowing what positions you need to begin with. People, on average, tend to do their job the way they know how. Hiring someone for something you don't need is not just a waste, it is actively damaging, as energy will be spent on counter-productive work.
Companies are often, and rightfully, concerned about hiring the wrong person. This carefulness should be extended to hiring for the wrong roles.
Thu, 4 Jul 2024 15:17:43 -0700
- Raspberry Pi home config: UPDATE! (permalink)
Last journal post made me want to try again to get a decent file server running. I ended up uninstalling Samba, as that was a fail, and installing copyparty. It is fantastic! 10/10 would recommend.
It's minimalistic, even a bit janky (in fact I'm not sure I installed it exactly "right", it's a python script and you have to manually add a .service and all related configuration, users, directories, plus certain features work only if given dependencies happen to be there...) but it works great.
Note: https://github.com/9001/copyparty/blob/hovudstraum/contrib/systemd/copyparty.service instructions almost worked, but I needed to set RestrictSUIDSGID to false for thumbnails to show up, otherwise I could see in the logs that it was trying to create directories for the thumbnails and failing to do so. Posted a ticket on github, not sure if I did something wrong or the instructions need to be amended.
In fact, I'd even say it would have been a great way to self-host a blog! Just have copyparty with a read-only public interface, and upload files/changes... And copyparty can also function as an FTP server, so no need to have vsftpd set up! I haven't done that yet, but I could also direct the web server (lighttpd) to serve a directory that copyparty also sees - which would allow me to quickly test web stuff.
All of this almost makes me consider self-hosting the couple of other things I currently use free-tier services for, namely feedly as an RSS aggregator, and pocket to save links I want to read later from my various browsers/devices - but so far these work just great and I don't see any risk of being locked in (in fact I already migrated my RSS aggregator once, feedly replaced the defunct google reader), so I don't think I'll bother for now.
BONUS: I love parsec! I use it a lot to access all my random machines; its main issue is that it does not work on iOS. I might one day consider tailscale to replace that: tailscale creates a VPN easily/for free, and then you can operate things that are built for local networks - in this case I'd use it for moonlight(-stream), but once one has such a setup I guess it becomes convenient to route everything through it, instead of exposing services to the public internet...
Lastly, I run a DLNA server to stream movies to my TV and to an oculus quest. Similar to when I started looking to self-host dropbox, I was again stumped by the fact that there don't seem to be simple, tiny solutions for this; in the end the most minimal setup that worked well was a Universal Media Server instance on a windows laptop that I almost never use, which I boot only when needed, controlling it again via Parsec. This machine is also the one I use to occasionally back up my stuff over a few HDDs, manually, a couple of times a year.
Mon, 24 Jun 2024 12:22:40 -0700
- Raspberry Pi home config. (permalink)
I think I just lost my wsl2 machine learning configuration (I wanted to post a journal entry for it exactly so I wouldn't end up here... oh well), so I guess it's a good time to at least "save" my rPI setup as of today.
I actually wrote most of the following in /etc/motd, so it was easy to retrieve. It's running on an rPI 3; I also have a 4, but that one is just automating some downloads at work, and unfortunately migrating would mean re-doing all the setup, with no good reason for me to do that right now.
The rPI started as a personal file store, only for files I don't mind losing or becoming public - I'm far from a sysadmin. In the end it effectively failed at that (I use the rPI for other things now), but I still occasionally use the FTP and it helps to keep my dropbox light.
- SMB, FTP (vsftpd) - the latter exposed to the internet (open ports in my router)
-- FTP and SMB serve the same files. As SMB by default is "global" while FTP is per-user, this required also mounting the SSD (fstab... bind) under the user's $home/FTP - and some permission trickery.
-- External SSD configured w/TRIM - in the end I gave up on exfat (SMB issues), it's using ext4. I wanted exfat originally because I planned to be able to easily unplug the SSD and connect it to laptops, but in practice I never do that anyways so... who cares.
-- I also have an http server (lighttpd), and TBH if I were to redo it all, I would settle on FTP and HTTP access to the same directory, and ignore SMB.
From my smb.conf:
...
[SSD]
path = /mnt/SSD
writeable = yes
browseable = yes
create mask = 0777
directory mask = 0777
public = no
From my vsftpd.conf:
chroot_local_user=YES
user_sub_token=$USER
local_root=/home/$USER/FTP
#chroot_local_user=YES
#chroot_list_enable=YES
allow_writeable_chroot=YES
From my notes in /etc/fstab:
# external ext4 (exfat has problems with samba, slow due to not supporting fallocate) SSD hard drive
#  for exfat set up for rw access for everyone umask=111, for ext4 just chown -R  pi:pi the mount point
PARTUUID=1375cf2e-a9d8-475f-b282-6712bcfb5640 /mnt/SSD ext4 defaults,noatime,nofail,nodiratime 0 1
SMB was needed because on OSX the native FTP connection is read-only. In the end, mac SMB is so bad (slow) that I don't use that either, so I could kill that and make my setup easier next time. On Windows it appears to be much faster but it was somewhat flaky. Samba is really not made to work over the internet.
I now use filezilla or winscp, and on iOS there is an FTP browser that is actually free & brilliant: FTPManager.
I know there are systems that can give you a self-hosted dropbox alternative (ownCloud, nextcloud, syncthing), but they are all big, bloated, ugly - I looked quite a bit at what is out there and found nothing I'd want to run, learn how to set up, etc.
Should try some simple web-based file browsers instead - copyparty and gossa seem more like what I'd like, next time...
- I use duckdns to post my dynamic IP to DNS, so I can access everything over the internet (the whole update "client" is a single HTTP call; a minimal sketch follows this list).
- SSH is also exposed to the internet, for obvious reasons; I only changed the default port via my router as a pretend thin veil of extra safety.
-- I installed fail2ban for some degree of "security" - I had to configure it for nftables, as raspbian uses that, and in the end it's doing... something. I'm not sure I got it 100% right, but it can't hurt.
-- Can be used from VS Code via the "remote SSH" extension! This is KEY! It's a game changer, as it allows me to develop remotely just as if it were a local install, and I keep all my website/blog stuff there. Posting on my blog involves logging into my home rPI and triggering some scripts.
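For reference, the whole duckdns "update client" essentially amounts to a single HTTP GET against their update endpoint. A minimal Python sketch of the idea (the domain and token below are placeholders; leaving the ip parameter empty lets duckdns use whatever public IP the request comes from):
from urllib.request import urlopen
from urllib.parse import urlencode
DUCKDNS_DOMAIN = "my-subdomain"                          # placeholder
DUCKDNS_TOKEN = "00000000-0000-0000-0000-000000000000"   # placeholder
def update_duckdns(domain, token):
    # duckdns answers with a plain "OK" or "KO" body
    query = urlencode({"domains": domain, "token": token, "ip": ""})
    with urlopen("https://www.duckdns.org/update?" + query, timeout=10) as resp:
        return resp.read().decode().strip() == "OK"
print("updated" if update_duckdns(DUCKDNS_DOMAIN, DUCKDNS_TOKEN) else "failed")
Dropped into cron (or a systemd timer) every few minutes, that is all there is to keeping the DNS record current.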
The rPI is also helping my home wifi to some degree, namely:
- Pi-Hole! DNS and DHCP (to force all devices to use the DNS...)
-- My router's DHCP is DISABLED!
-- This is currently the only service I use for the home. I also have a rPI camera module so in theory I can trigger it, but in practice I never use that.
I also did a few things that I don't actually know whether they are that useful or not... but they seem to be part of good rPI practice:
- Log2Ram is installed to reduce the write load on the SD card
-- Had to ensure logrotate and journald were set to reduce the size of the logs...
-- Forced systemd-journald to write log to memory, reducing SD writes (check w/sudo glances)
- Also disabled the swapfile, added noatime and commit=900 to fstab / mount
-- Made the system fsck the / partition at boot (via tune2fs)
I guess I could/should perhaps just redirect all of these things to the SSD, but I don't know how wise that would be either, as logs are quite a fundamental part of linux and the SSD is an external device that in theory could have its own issues etc. Not that it has ever caused me grief, but neither has the microSD card I'm using... The entire system is so low-traffic, I dunno. Also, the rPI 5 supports proper SSDs on board, so this is not even a concern for the future...
- A backup is scheduled via crontab and https://github.com/lzkelley/bkup_rpimage
- Should perform updates automatically (via unattended-upgrades), so far this seems to be working correctly too...
- The entire setup is console only (no x11 etc; I think I achieved this by removing one of the core x11 libraries and, from there, all its dependents)
- Also removed avahi-daemon, could remove bluetooth and wifi but did not bother...
- Set up for a single user only so far; did not bother to create anything besides my own admin user...
I could send logs to Logz.io (free plan, rsyslog) - I did not, but it is an interesting option. I could enable the hw watchdog to reboot if it detects a crash, but I have never needed that so far... KnockD (port knocking) also seems quite cute but, again, I couldn't be bothered. The main attraction of this whole setup is that it is easy to access from everywhere and as simple as possible; if a motivated hacker were to compromise it, I could not care less, it does not host anything of value.
- Some packages I added:
-- mc, micro, tmux, tldr, smartmontools, weechat, googler, glances, nnn, exa, ag, fd, http
-- youtube-dl, yt-dlp (via pip3), bsdgames
-- tmux commands: new, ls, a(ttach) ... Ctrl+b d = detach!
VScode + piHole make this setup really useful for me. But in all honesty, if I were to upgrade it in the future, I'd probably move away from the rPI and use a small x64 computer-on-a-stick or even a mini PC instead. The rPI is still not really a good solution for anything in particular in my life, I just happened to have one lying around and now, after many many years, it finally found a way to make itself useful :)
You can notice that there is zero smart home automation stuff, as I consider 99% of the smart-home, IoT stuff to be worthless, dangerous junk :)
Sat, 22 Jun 2024 12:22:40 -0700
- Be honest with yourself. (permalink)
It's ok to reinvent the wheel. It's ok to love the act of programming for programming's sake. It's great to learn new stuff. But try to be honest with yourself, I promise, it will help you.
Most people I see writing small custom engines (and you can replace "engine" with anything else) try to rationalize their choices as ones of utility.
It's said that superstition is born out of our brain's need to find causal relationships, which biases us towards inventing fantasies when we can't find obvious ones.
In a similar vein, I think engineers' brains are wired towards the delusion of being able to find optimal solutions to their problems, i.e. of being purely rational machines, and so we tend to live in the illusion that what we do always, objectively, matters.
It's ok to have fun! It's ok to even have mindless fun, and it's definitely ok to learn, to have hobbies and so on. And if you admit to yourself that perhaps you are reinventing the wheel because you like it, not because it is going to save the world, it has a few added bonuses:
1) You might be less depressed, as you have aligned your expected outcomes to the reality of things.
2) You might understand what you really like. Maybe you never wanted to make a game, maybe you enjoy building tools, or maybe you just want to learn something new.
3) You might dedicate the right amount of effort to the task, both in the sense of not consuming too much time if you end up honestly evaluating that you should not, and in the sense of spending the time you decide to allocate in a guilt-free way.
I started writing this entry after stumbling upon https://legendofworlds.com/blog/4 - I read webgpu, wasm, multithreading, rust, ecs... while seeing screenshots of something 2d-zelda-like, and my blood levels of grumpiness immediately rose - but I don't intend to point fingers, in fact the opposite!
Bless people who want to reinvent the world, and tinker with tech. I do it myself, and I don't even pretend to be able to always know where I am in between false rationalizations and real ability to assess the right amount of energy to put into something based on my objectives, knowledge, and areas of productivity.
But being aware is half of the battle. And forgiving yourself is one of the secrets to a happy life. I like pens, not because they make me a better writer. I collect cameras, not because they make me a better photographer (in fact, the more cameras, the worse you are going to be). I fix old computers that I know I won't really use. And yes, I even write code at times just because I like it. And it's ok.
p.s.
Years ago Leonard Ritter / Paniq blocked me over similar suggestions: that his game (which I was backing) would never ship, and that he might find more happiness in making a Patreon, as everyone would just love to follow his crazy experiments regardless.
Still today, I don't think I was wrong, but I know it was not kind - and I wish to apologize. Making games is hard; nobody needs people criticizing them when they are already doing one of the hardest things.
Mon, 17 Jun 2024 12:22:40 -0700
- The rise of the console-indie. (permalink)
It's interesting how engine wars go. During the ps3 era, Unreal managed to edge out the competition by being among the first licensable engines that aggressively targeted the console market (and probably helped by how hard ps3 development was, especially initially).
Then the age of Unity came - AAA developers consolidated more and more into a few big studios/publishers who could afford to create ad-hoc technology for their titles, eliminating the need to license engines. Unity was the first to understand the importance of new markets, of "3d for everyone": indies, mobile and so on...
For the longest time I thought that was it. Commercial engines make sense only at the "bottom end", where creatives are plentiful but small. And the overall winner would be decided not by the engine, but by the whole ecosystem, i.e. cloud services.
Epic was safe, in this scenario, due to its pivot to the metaverse, fueled by Fortnite, i.e. reaching out for the next obvious step towards expanding the availability of the real-time 3d media to a broad base of creators. UGC et al.
But now, looking at the xbox showcase, at the sony showcase... most games are on Unreal 5! This is quite impressive; I don't know how much it matters in monetary terms, but this generation of consoles is much friendlier to indie/AA - in fact, most of the spotlight is on them, even more so than when xbox invented xbla! We have reached a point where there are only a handful of megaproductions, the gigantic blockbusters, and so most of the games on offer on console are... not that, numerically.
Interesting.
Fri, 14 Jun 2024 14:01:25 -0700
- All political discourse is populist. (permalink)
One thing that I ponder sometimes is the role of the person of culture. What to do once you have enough knowledge to understand the nuance of the world - the basic fact that no complex problem has simple solutions?
On HN, one of the most hotly debated subjects is the housing/crime/drug crisis, surely because most of that audience lives in the bay area and other affluent bigtech cities, which all inevitably trend towards the same problems.
And you always see the same camps: the "crime is bad mmkay" one, the people who offer their sympathy to the neglected, but not at any personal expense, and the ones who justify any action in what they perceive as a fundamentally unjust society. Then lots of "debate" ensues - in quotes, because in the end, there is no real difference.
Everyone falls for the same basic instincts - solutions that are simple, painless, and do not involve them. It's the 1% problem, just prohibit ownership of houses for investment, just build more, and so on and so forth.
As if the economy were not a connected affair. As if any action that causes houses to lose value wouldn't affect everyone. As if landlords were making extraordinary ROI on their properties. As if it were, fundamentally, a proven mathematical fact that higher wealth causes larger disparities.
But maybe we'll solve society with an app, or AI, or bitcoins. Democratize bullshit.
Social media, "the algorithm", likes et al - make populism faster, make it the only option to have reach, as nobody clicks like on nuanced opinions. But it has always been there, it is a byproduct of democracy, it is why politicians have to lie, even beyond the inevitable compromises that are necessary once you do not receive 100% of the votes.
This is a mad ramble - and it doesn't really matter, the specifics above were just an example. The wider problem, the meta one I haven't resolved in myself yet, is what to do once you know enough of the world to see the bullshit on either side of populist arguments, while at the same time, in the good old Machiavellian sense, you are also aware of the inevitability - no - the necessity of such arguments.
I doubt any inch of social progress has been attained on the basis of pure rationality. You need the passion, the youth, to move forward, even stumbling into idiotic arguments along the way. At the same time, it's impossible to partake: one can bite one's tongue and not start pedantic arguments when the ends justify the means, but to enthusiastically espouse positions that one knows to be incorrect, only because of their inevitability - knowing they will be mediated down to something reasonable (in the best case), or that society is a messy affair anyway - requires some kind of sociopathy I have not yet evolved into.
A conundrum. I'll let you know if I figure it out one day.
Sun, 9 Jun 2024 14:34:22 -0700
- Tidying up (permalink)
I have a few old PC laptops and many more emulators, of various systems on various systems. I wrote a bit about it here https://c0de517e.blogspot.com/2022/04/dos-nostalgia-on-using-modern-dos.html
Thing is, I don't end up using these things much. There is more pleasure in setting things up, researching and so on, than in the actual day-to-day usage.
I think this is a pretty common thing. At its worst it's a form of procrastination, where doing mundane tasks makes us feel productive even if our output is no different from scrolling cat videos on instagram.
Busy work.
But as a hobby, as a pastime, I think it's perfectly fine. Especially if one can resist consumerism - which is an unfortunate side effect. Setting up and researching stuff often leads to collecting - "gear acquisition syndrome" is what photographers call it - where the cult of the object becomes a pursuit of its own, even if the objects themselves are not really enjoyed past their accumulation.
FWIW - I try to control that by self-imposing some amount of usage before getting the next thing, by keeping lists of things I'd like - instead of buying them - and by selling stuff.
I wonder if there is some fancy German word to describe the pleasure of aimlessly tidying up.
Related: https://store.steampowered.com/app/1629520/A_Little_to_the_Left/
Wed, 5 Jun 2024 13:11:10 -0700