<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://silvanocerza.com</id>
    <title>Silvano Cerza Blog and Thoughts</title>
    <updated>2025-12-10T16:39:20.351Z</updated>
    <generator>https://github.com/jpmonette/feed</generator>
    <author>
        <name>Silvano Cerza</name>
        <email>silvanocerza@gmail.com</email>
    </author>
    <link rel="alternate" href="https://silvanocerza.com"/>
    <subtitle>Blog posts and random thoughts about anything, mostly software development.</subtitle>
    <icon>https://silvanocerza.com/icon.png</icon>
    <rights>This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.</rights>
    <entry>
        <title type="html"><![CDATA[A rant on Ghibli slop]]></title>
        <id>a-rant-on-ghibli-slop</id>
        <link href="https://silvanocerza.com/post/a-rant-on-ghibli-slop"/>
        <updated>2025-03-30T10:34:28.000Z</updated>
        <content type="html"><![CDATA[<p>I started writing this as a simple thought, it quickly evolved into a rant of sorts.</p>
<p>This past week OpenAI released their new image generation model. It's genuinely impressive stuff, it can create and reason about images in ways that classic diffusion models really can't.</p>
<p>Quickly people started generating slop upon slop of Studio Ghibli styled images.</p>
<p><img src="https://silvanocerza.com/images/angry-princess-mononoke.jpg" alt="Angry Princess Mononoke"></p>
<hr>
<p>All my socials were flooded with these images. Instagram was full of the usual memes, just restyled as Ghibli, adding no content to them. People were sharing their memories in Ghibli style, again adding no substance to those memories.</p>
<p>LinkedIn was a completely different beast, every single post had a Ghibli style image, even when it wasn't remotely relevant. This could just be my feed, since I also work in the field and I'm connected to and following lots of people that create products for and with AI.</p>
<p>I want to focus a bit on LinkedIn because it's the main cause of this writing. Again, since I'm in the field I'm connected to lots of founders, engineers, and "creators" that work every day on some AI related project or product.</p>
<p>I use those tools every day too, mainly to help me write code faster, though what bothers me is not their use per se but the way some people use them. I see entire profiles sharing posts or articles with only AI generated images. Every. Single. Post.</p>
<p>I'm not talking about random people either, I'm talking about company founders, people that spend their full time creating "disruptive" products they're selling to people. People that are VC funded, that got seed rounds in the millions, people that are trying to create something new to break the market, people that want to shake up industries in a fundamental way.</p>
<p>Those people are sharing the Ghibli styled slop.</p>
<p>This made me think deeply about their actions and how they affect the world around them too. How can they build something disruptive, something completely new, something that will shake existing industries, if they can't even bother sharing original images with their writing? Are they even writing what they're sharing?</p>
<p>Is their writing actually theirs? Writing is an amazing tool for me, it helps me reason about problems by putting things in order, it helps me share my thoughts with other people. It means a lot to me because I believe in what I write, everything you're reading here has been thought and written by me.</p>
<p>I do use AI though, as an editor and a final proofreader. English is not my first language, I make mistakes sometimes, so it's good to have a kind of editor that might catch this or that grammatical error. But that's it, the rest is all mine.</p>
<p>I don't often use generated images for a post cover, the only image I used here is Princess Mononoke because it makes sense to use it in this case. It enhances the narrative of this post (rant?), it gives more meaning to it, if you've seen the movie and understood it you'll understand what I'm writing too. Hopefully.</p>
<p>This is the important point in my opinion. Are those generated slop images enhancing the content, or are they making it worse by commoditising it and thus lowering its value?</p>
<p>Is content really worth anything if you didn't create it?</p>
<p>I'm also wondering how on Earth people are pouring so much money and trust into founders that claim to be able to change everything, when they can't even manage to create something new for a stupid post on their main communication channel?</p>
<p>I'm tired of ranting for now.</p>]]></content>
        <author>
            <name>Silvano Cerza</name>
            <email>silvanocerza@gmail.com</email>
        </author>
        <category term="ghibli"/>
        <category term="slop"/>
        <category term="rant"/>
        <category term="ai"/>
        <rights>This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.</rights>
    </entry>
    <entry>
        <title type="html"><![CDATA[A new way to sync your Obsidian vault]]></title>
        <id>a-new-way-to-sync-your-obsidian-vault</id>
        <link href="https://silvanocerza.com/post/a-new-way-to-sync-your-obsidian-vault"/>
        <updated>2025-03-28T11:59:13.000Z</updated>
        <content type="html"><![CDATA[<p>I've been working on this Obsidian plugin for quite some time and I'm really happy to say that it's finally been included in the list of community plugins!</p>
<p>You can find the sources and the install instructions in my GitHub profile over <a href="https://github.com/silvanocerza/github-gitless-sync/tree/main">here</a>. But let's delve a bit into how I came up with the idea for the plugin.</p>
<hr>
<p>If you have never heard of <a href="https://obsidian.md/">Obsidian</a>, it's a desktop and mobile note taking app that you can extend with plugins written in TypeScript. It's completely free and has gained quite some popularity in recent years because of the <a href="https://help.obsidian.md/obsidian">philosophy behind it</a>.</p>
<h2>Why create a plugin?</h2>
<p>I started working on this plugin at the start of January. At the time I had recently finished reworking my blog from using <a href="https://gohugo.io/">Hugo</a> to a custom solution — I'll talk about this in another blog post — so I needed an easy way to sync my Obsidian vault with a GitHub repository.</p>
<p>All my blog content is hosted in the <a href="https://github.com/silvanocerza/blog-sources"><code>blog-sources</code> repo</a> as Markdown files; even though I work every day with an IDE I didn't want to use one to edit my blog sources. Mainly because I don't find it comfortable to edit prose in an IDE, and also because I wanted to edit my blog whenever and wherever I wanted, without needing a desktop at hand.</p>
<p>Since I was using Obsidian more and more for note taking, both on desktop and mobile, and I love the high customisability of the product, I wanted to leverage it to edit my blog content too. So I started searching for existing solutions to sync Obsidian with GitHub, though I didn't like what I found.</p>
<h2>State of the art</h2>
<p>There are obviously other people that had a similar idea and leverage Obsidian for their blog, though most of them rely on custom Bash scripts to push their vaults to GitHub. This wasn't good for me as it's not multi-platform, and I couldn't easily run a Bash script on my iPhone. Also I wanted the sync action to be simple, just the click of a button, with no need to run this or that script depending on your state.</p>
<p>So I looked around for existing community plugins, and I actually found some. The first I stumbled upon was <a href="https://github.com/kevinmkchin/Obsidian-GitHub-Sync"><code>Obsidian-GitHub-Sync</code></a>, which relies on having the Git executable installed in the system. It builds the Git command internally and then executes it. That's fine for desktop, though it obviously doesn't support mobile.</p>
<p>The most famous and widely used is <a href="https://github.com/Vinzent03/obsidian-git"><code>obsidian-git</code></a>, a more advanced plugin that integrates Obsidian with Git in a deeper way. It lets you see the history, commits, diffs, manage branches, and all the other cool things you can do with Git. This one too relies on Git being installed in the system; on mobile it relies instead on <a href="https://isomorphic-git.org/"><code>isomorphic-git</code></a>, a pure JS reimplementation of Git, but that's experimental and not that reliable.</p>
<p>So since I wanted something really simple without all the advanced Git features, and a solution that would work the same way on desktop and mobile, I decided to try and solve the problem from a different direction.</p>
<h2>Git without Git</h2>
<p>Git was at the same time the most useful and the most annoying thing. I wanted to be able to manage a GitHub repository without needing the Git executable, which led me to explore the GitHub REST API documentation and find <a href="https://docs.github.com/en/rest/guides/using-the-rest-api-to-interact-with-your-git-database">this page</a>.</p>
<p>That's huge! I can access and edit a repository's Git database from their REST API, which gives me what could be considered a plumbing interface. Git has <a href="https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain">two main interfaces</a>: porcelain, the shiny high level commands that most people use, and plumbing, the gritty commands that let you access the lower level internals.</p>
<p>I've been using Git via the command line for more than 10 years, I know my way around blobs, commits, reflogs and the lower level interface. This experience mainly comes from the numerous mistakes I made with Git, but that's how most people learnt the lower level parts I bet. You mess up your repo in the most absurd way and then try to find a solution. But let's go back to the plugin.</p>
<p>With those endpoints to the Git database I could create my own "porcelain" in a way, which also means I don't need a Git executable on the system that's running my plugin. All I need is the ability to run HTTP requests, and that's what I've done. I would consider my <a href="https://github.com/silvanocerza/github-gitless-sync/blob/main/src/sync-manager.ts"><code>SyncManager</code> class</a> a sort of porcelain interface, it lets me run all the necessary operations to sync remote and local files with some "simple" commands.</p>
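<p>To give an idea of what driving those endpoints looks like, here's a minimal sketch — not the actual <code>SyncManager</code> code — that lists every file in a repository using only the Git database API and plain HTTP; the owner, repo, branch, and token values are placeholders.</p>
<pre><code class="language-typescript">// Sketch: list every file in a repo through the GitHub Git database API,
// no git executable needed. owner/repo/branch/token are placeholders.
interface TreeEntry {
  path: string;
  type: "blob" | "tree";
  sha: string;
}

// Directories come back as "tree" entries, actual files as "blob" entries.
function onlyBlobs(entries: TreeEntry[]): TreeEntry[] {
  return entries.filter((e) => e.type === "blob");
}

async function listRepoFiles(
  owner: string,
  repo: string,
  branch: string,
  token: string,
): Promise&lt;TreeEntry[]&gt; {
  const headers = {
    Authorization: `Bearer ${token}`,
    Accept: "application/vnd.github+json",
  };
  const api = `https://api.github.com/repos/${owner}/${repo}`;
  // Resolve the branch ref to a commit...
  const ref = await (await fetch(`${api}/git/ref/heads/${branch}`, { headers })).json();
  const commit = await (await fetch(`${api}/git/commits/${ref.object.sha}`, { headers })).json();
  // ...then fetch the commit's whole tree in a single recursive call.
  const tree = await (await fetch(`${api}/git/trees/${commit.tree.sha}?recursive=1`, { headers })).json();
  return onlyBlobs(tree.tree);
}
</code></pre>
<p>Writing blobs, trees, and commits back works the same way, just with <code>POST</code> requests against the same endpoints.</p>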
<p>There are some downsides to this approach obviously. Main one is that it's closely tied to GitHub, I can't sync my vault with GitLab, Gitea, or other Git hosting services. For the time being this is not a huge issue for me because of the way I use the plugin, though this kind of lock-in might not be something that every user will be happy with. I don't rule out adding support for other hosting services in the future, though it all depends whether they support similar REST APIs.</p>
<p>It's also limited by the capabilities of the API, mainly by the <a href="https://docs.github.com/en/rest/git/trees?apiVersion=2022-11-28#get-a-tree">get tree endpoint</a>: the <code>tree</code> array field in its response is limited to 100,000 entries with a maximum size of 7 MB when using the <code>recursive</code> parameter, which I use extensively. This can be worked around by making more requests to retrieve the repository content folder by folder if it becomes an issue in the future. That's going to be slower obviously, since it will require more requests, but at least it will support bigger vaults.</p>
<p>Another big issue was syncing multiple vaults with the same remote repository. Think of different vaults as two different people working on the same project and the same branch: if one pushes a change the other must first pull that change before pushing their own. With Git this is "easier" to handle since you have more advanced functionalities to work with, but my plugin has just a button, so I needed a way to handle these cases.</p>
<h2>Conflict resolution</h2>
<p>I wanted to give the user the possibility to see the conflicts a sync would cause and a way to resolve them. Automatic resolution is not good in my opinion, mainly because I'm scared of making wrong assumptions and resolving a conflict in a way that loses data. I still give the user the chance to automatically resolve conflicts by always overwriting remote or local files, but that's a choice the user must consciously make; by default I always ask the user how to solve a conflict.</p>
<p>So I needed an interface to let the user visualise the sync conflicts. I used Meld for years, I think it's one of the most intuitive interfaces for resolving conflicts, it can clearly be understood by anyone. So I decided to build something similar.</p>
<p>Obsidian offers <a href="https://docs.obsidian.md/Plugins/User+interface/HTML+elements">some APIs</a> to create and add HTML elements in the UI. It's certainly useful for building simple interfaces, in fact I used it for my settings page, though it's not the best for complex UIs like the one I wanted to create. They actually know this and have simple guides in their developer documentation for using <a href="https://docs.obsidian.md/Plugins/Getting+started/Use+React+in+your+plugin">React</a> or <a href="https://docs.obsidian.md/Plugins/Getting+started/Use+Svelte+in+your+plugin">Svelte</a>; my choice fell on React as I'm more experienced with it. So I started building the interface.</p>
<p>Luckily Obsidian uses the <a href="https://codemirror.net/">CodeMirror library</a> too, a highly extensible code editor, so I didn't even need to choose an editor library for my conflict resolution view. The documentation is really well written and contains lots of information, which was especially useful when trying to fix the code generated by Claude. The library has been around since mid 2018 and the latest major version was released 3 years ago, so models have been trained on a LOT of code from old versions, which caused some frustration when trying to understand which features were available to me.</p>
<p>In the end I managed to make it work like I wanted and created this nice split view. If you've ever used Meld or an editor from IntelliJ this will look familiar for sure. It lets you see at a glance which lines have been modified, which have been added or removed; by pressing buttons you can choose this or that version, or you can directly edit the content.</p>
<p><img src="https://silvanocerza.com/images/split-conflict-resolution-view.png" alt="Split conflict resolution view"></p>
<p>This works great on desktop and bigger screens, though I wanted this plugin to work nicely on mobile too, and this interface would have been a nightmare to use on smaller screens. I bet at least once you've had to use a UI that was clearly made for desktop and ported to mobile as is, or vice versa; that's terrible UX in my opinion. So I had to create a completely separate view for mobile.</p>
<p>It was clear that I could only have a single editor showing the conflicts, so I could leverage as much screen space as possible. This time my inspiration was the conflict resolution from <a href="https://code.visualstudio.com/docs/sourcecontrol/overview#_merge-conflicts">Visual Studio Code</a>; I'm not a great fan of it but it does the job.</p>
<p>I felt it was important for this view to have the same features as the split view: highlighted diffs, buttons to quickly accept or discard changes, and directly editing the content. Simple actions are especially important on mobile in my opinion; you're already limited by not having a big screen and a keyboard, you shouldn't be limited in functionality too.</p>
<p><img src="https://silvanocerza.com/images/unified-conflict-resolution-view.png" alt="Unified conflict resolution view"></p>
<p>I won't delve much more into the conflicts view; if you're interested in the internals and want to understand more deeply how I implemented them I suggest you take a look at the <a href="https://github.com/silvanocerza/github-gitless-sync/tree/03758866fb687f70ec24735b0be3c40856b2447f/src/views/conflicts-resolution">code here</a>.</p>
<h2>The future</h2>
<p>I would consider this the MVP of my plugin: it lets you sync your vaults and handle any conflict that might arise. Now that I've released it officially, my main goal is adding some features that the <a href="https://obsidian.md/sync">official sync plugin from Obsidian</a> has: selective sync, so you can ignore certain folders or file types, and file history, so you can see how a file changed over time and roll back to an old version if necessary.</p>
<p>Though this is for the future, I spent quite some time on this and I'd like to work on some other projects of mine for a bit before coming back to those features. Obviously bug fixing and other maintenance work will still be done.</p>
<p>In the end I'm happy that I created something that's useful for me and for other people too; I had already received great feedback and lots of curiosity from the community before even releasing it. It was also really fun to work on the project, I had to tackle some problems I'd never stumbled upon before and learned something new.</p>
        <author>
            <name>Silvano Cerza</name>
            <email>silvanocerza@gmail.com</email>
        </author>
        <category term="obsidian"/>
        <category term="typescript"/>
        <category term="react"/>
        <rights>This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.</rights>
    </entry>
    <entry>
        <title type="html"><![CDATA[My Home Network Setup]]></title>
        <id>my-home-network-setup</id>
        <link href="https://silvanocerza.com/post/my-home-network-setup"/>
        <updated>2025-03-21T08:20:00.000Z</updated>
        <content type="html"><![CDATA[<p>I have a small PC at home that I use as a server to run some <a href="https://github.com/silvanocerza/personal-services">personal services</a> of mine: Jellyfin, Sonarr, qBittorrent, and others. This is an outline of the current setup and some of the problems I faced.</p>
<hr>
<p>One of the things that always bothered me about running a server in my network is having to use the IP and port to access this or that service. So when I first set everything up, I had to find a solution to access each service using a different URL. After some searching and thinking I came up with a solution that satisfies me quite nicely.</p>
<h1>Equipment</h1>
<p>I'm using a <a href="https://www.gl-inet.com/products/gl-mt6000/">Flint 2</a> as my router, this gives me the chance to do some funky stuff in my network. It can easily run <a href="https://adguard.com/en/adguard-home/overview.html">AdGuard Home</a> to filter ads network wide, since it acts as a DNS server I can also rewrite some DNS queries. I can also run a <a href="https://www.wireguard.com/">WireGuard</a> server and/or client with ease, this is especially helpful to access my LAN when not at home.</p>
<p>The actual server is a <a href="https://www.minisforum.com/products/minisforum-nab6-nab9-nab6-lite-nab7-amz?variant=49516388778290">Minisforum NAB6</a> with an Intel i7-12650H CPU, 32 GB of RAM and a 1 TB SSD. It's small enough that it can fit anywhere, and powerful enough to handle 4K movies and multiple containers at the same time. It also has a couple of 2.5G Ethernet ports and enough USB ports to plug in extra peripherals like a Zigbee antenna or extra storage. It was also quite cheap at around 500 euros.</p>
<p>This is all the hardware I use to efficiently manage my LAN.</p>
<h1>Domains</h1>
<p>I find it EXTREMELY annoying having to write the server IP every time I want to access it, so my top priority was using a custom domain.</p>
<p>This is where AdGuard becomes really useful. Since it acts as a DNS server, I can define custom DNS rewrites with just a couple of clicks: every DNS query that passes through my LAN will first go through AdGuard and be rewritten if it matches any domain I'm interested in.</p>
<p><img src="https://silvanocerza.com/images/adguard-home-dns-rewrite.png" alt="AdGuard Home interface to configure DNS rewrites"></p>
<p>I decided to use two different domains, <code>rt.it</code> to access my router directly, and <code>tv.it</code> to access the server. I picked those for two main reasons:</p>
<ul>
<li>they're fast and easy to type</li>
<li>they're not assignable</li>
</ul>
<p>Some backstory might be helpful here.</p>
<p>By chance I found out that every two-character domain in the <code>.it</code> ccTLD is non-registrable. If I use any of those for my LAN I'll never risk that domain clashing with one registered by someone else. I could have used my <code>silvanocerza.com</code> domain to be fair, but I wanted a short one that I could type quickly.</p>
<p>Registro.it, the Italian national organisation responsible for the assignment and management of <code>.it</code> ccTLD domains, decided to completely ban registration of two-letter domains between 1999 and 2000. This was probably done to protect the geographic domains for Italian provinces, think <code>.mi.it</code> for Milan, <code>.rm.it</code> for Rome, etc.</p>
<p>There is one two-letter <code>.it</code> domain in the wild though: <a href="https://q8.it">q8.it</a>, owned obviously by Q8, the oil company, which bought it back in 1996 and still uses it. I wonder what would happen if they ever forgot to renew the registration.</p>
<p>Some people might wrinkle their nose at this choice. Why not use one of the <a href="https://en.wikipedia.org/wiki/Top-level_domain#Reserved_domains">reserved TLDs</a> like <code>.local</code> or <code>.internal</code>? I did take them into consideration, but all of them are quite long, and I wanted a short URL. The only usable one would have been <code>.internal</code>, as all the others have actual uses, <code>.local</code> being used for mDNS as an example.</p>
<p>All in all the choice I made works for the time being as I know it won't cause any clashes. This also ties really nicely with the way I set up my services.</p>
<h1>Services</h1>
<p>To run my services I decided to have a bunch of Docker Compose files, each named after the service it defines. So Sonarr will be in <code>sonarr.yml</code>, qBittorrent in <code>qbittorrent.yml</code>, and so on.</p>
<p>Since I have multiple services on my server I need to route different requests to different containers. To do this I decided to use <a href="https://doc.traefik.io/traefik/">Traefik</a> as an application proxy. It's easy to configure and adding new containers doesn't require any change in its configs. The configuration required is also pretty minimal.</p>
<pre><code class="language-traefik.yml">services:
  traefik:
    image: "traefik:v3.2"
    container_name: "traefik"
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entryPoints.web.address=:80"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    networks:
      - traefik-net

networks:
  traefik-net:
    name: traefik-net
</code></pre>
<p>As you can see I don't expose all containers by default, thanks to the <code>--providers.docker.exposedbydefault=false</code> flag. I do this since I'm also running services that don't need to be accessible from the network but act as support for the others.</p>
<p>Like <a href="https://containrrr.dev/watchtower/">Watchtower</a>, which automatically updates the containers base images.</p>
<pre><code class="language-watchtower.yml">services:
  watchtower:
    image: "containrrr/watchtower"
    container_name: "watchtower"
    environment:
      - TZ=${TIMEZONE:-Etc/UTC}
      - WATCHTOWER_HTTP_API_TOKEN=${WATCHTOWER_API_TOKEN?error}
    command:
      - "--cleanup"
      - "--http-api-metrics"
    ports:
      - "8082:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
</code></pre>
<p>Since it doesn't need to be accessible from the LAN I don't even set its network to <code>traefik-net</code>. Notice though that I had to expose the host's Docker socket to both. Traefik needs it to listen for Docker events in case new containers are added to its network; it's mounted read-only since it only needs to listen. Watchtower instead needs write access too, as it needs to update running containers.</p>
<p>In here, and all other services too, I'm also using <a href="https://docs.docker.com/compose/how-tos/environment-variables/variable-interpolation/">Docker Variable Interpolation</a> to manage secrets and variables that I keep in a <code>.env</code> file ignored by Git. This way I can easily share all my services definitions without leaking anything.</p>
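<p>As an illustration, the <code>.env</code> file for these services might look something like this; every value below is a made-up placeholder, not my real configuration.</p>
<pre><code class="language-.env"># Hypothetical values, kept out of Git
TIMEZONE=Europe/Rome
PUID=1000
PGID=1000
UMASK=022
COMMON_STORAGE=/srv/storage
MAIN_ENDPOINT=tv.it
WATCHTOWER_API_TOKEN=some-long-random-token
</code></pre>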
<p>A typical service that is accessible from LAN instead will look quite different from the ones above, this is the one for Sonarr.</p>
<pre><code class="language-sonarr.yml">services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=${PUID?error}
      - PGID=${PGID?error}
      - UMASK=${UMASK?error}
      - TZ=${TIMEZONE:-Etc/UTC}
    ports:
      - "8989:8989"
    volumes:
      - "${COMMON_STORAGE?error}/config/sonarr:/config"
      - "${COMMON_STORAGE?error}/data:/data"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.sonarr.rule=Host(`sonarr.${MAIN_ENDPOINT?error}`)"
      - "traefik.http.routers.sonarr.entrypoints=web"
      - "traefik.http.services.sonarr.loadbalancer.server.port=8989"
      - "traefik.docker.network=traefik-net"
    restart: unless-stopped
    networks:
      - traefik-net

networks:
  traefik-net:
    external: true
</code></pre>
<p>For most of my services I use the images created by the amazing people at <a href="https://www.linuxserver.io/">linuxserver.io</a>, they really do an amazing job maintaining all that.</p>
<p>Some services need to store configs and other data; I keep that in a common directory on my server that I set with the <code>COMMON_STORAGE</code> variable. I do this because it makes it easy to edit configs if need be, but also because this way I don't lose them when I destroy a container.</p>
<p>The key parts to expose the service to the LAN are the <code>labels</code> and <code>networks</code> fields.</p>
<p>Since I don't expose any service by default I need to set <code>traefik.enable=true</code> to let Traefik know that it can "use" the container.</p>
<p>The <code>traefik.http.routers.sonarr.rule=Host(`sonarr.${MAIN_ENDPOINT?error}`)</code> label is the most important one: it's the rule that tells Traefik when it must route requests to this container, in this case when the request matches <code>sonarr.tv.it</code>. I used a variable here so I can easily change my domain in the future. There are obviously other <a href="https://doc.traefik.io/traefik/routing/routers/#rule">supported rules</a>, but this does the job for me.</p>
<p>If we're not explicit about which entrypoints the service should receive connections from, it will use all of Traefik's default ones. With <code>traefik.http.routers.sonarr.entrypoints=web</code> we define only the <code>web</code> one, which receives requests on port <code>80</code>, the port for HTTP.</p>
<p>By default no service ports are exposed to Traefik, so we must declare which one to use with <code>traefik.http.services.sonarr.loadbalancer.server.port=8989</code>. This usually matches the <code>ports</code> settings.</p>
<p><code>traefik.docker.network=traefik-net</code> shouldn't be necessary as it overrides the default network for that container, but I like to be explicit so I set it in any case. If the container belongs to multiple networks this becomes necessary, otherwise Traefik will pick a random one, which could cause routing issues.</p>
<p>This is just one of the services I defined; to see more examples check out <a href="https://github.com/silvanocerza/personal-services">this repo</a>.</p>
<h1>Outside access</h1>
<p>Obviously these services become really useful when I can access them wherever I am, even though I'm mostly at home. So I needed some way to make the LAN available to my mobile devices, mainly my iPhone and MacBook.</p>
<p>There are obviously many different solutions, <a href="https://www.cloudflare.com/zero-trust/products/access/">Cloudflare Access</a>, <a href="https://ngrok.com/">ngrok</a>, or <a href="https://tailscale.com/">Tailscale</a>, to name a few. Though I didn't want to rely on a third party, even one with a free tier available, as they like to move the goalposts and I didn't want to risk having to change my setup at a moment's notice because of the whims of a random company.</p>
<p>As I said above my router can run both a WireGuard server and client with no issue, so I decided to use that; I just have a couple of devices to configure so that didn't take much time. GL.iNet also provides some <a href="https://docs.gl-inet.com/router/en/4/tutorials/wireguard_server_access_to_client_lan_side/">nice guides</a> to help you set everything up if you're not sure about the correct steps.</p>
<p><img src="https://silvanocerza.com/images/flint-2-vpn-dashboard.png" alt="Flint 2 VPN Dashboard"></p>
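<p>For reference, a client-side WireGuard configuration for this kind of setup looks roughly like the one below. The router generates the real one for you, so all keys, addresses, and subnets here are made-up placeholders.</p>
<pre><code class="language-wg0.conf">[Interface]
# This client's key, generated by the router's WireGuard UI
PrivateKey = CLIENT_PRIVATE_KEY_HERE
Address = 10.0.0.2/32
# Use the router as DNS so the AdGuard rewrites keep working remotely
DNS = 192.168.8.1

[Peer]
PublicKey = SERVER_PUBLIC_KEY_HERE
# The static IP from the ISP plus the WireGuard port
Endpoint = STATIC_IP_HERE:51820
# Route the LAN and the VPN subnet through the tunnel
AllowedIPs = 192.168.8.0/24, 10.0.0.0/24
PersistentKeepalive = 25
</code></pre>
<p>Pointing <code>DNS</code> at the router is what keeps the custom domains working even when connecting from outside the LAN.</p>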
<p>A cool thing of this approach is that if in the future I want to build my own private VPN that exits from nodes around the world I can. I'll have to get some machines from AWS, GCP or Azure — probably not lol — scattered around the world and use them as WireGuard servers, while my Flint 2 acts as a single client. For the time being I can rely on existing VPNs if necessary.</p>
<p>The annoying thing though is that I must use a static IP. I took <a href="https://en.wikipedia.org/wiki/Dynamic_DNS">Dynamic DNS</a> into consideration, but I'm behind that beautiful thing that is <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT">CGNAT</a>, which doesn't work reliably with WireGuard, so I was forced to use a static IP. Luckily the process to request one from my ISP is automated and took no time at all. The main downside is that now I need to be careful not to leak it, because it's probably a pain in the ass to change.</p>
<h1>SSL</h1>
<p>You might have noticed I never mentioned SSL anywhere, simply because I'm not using it. All the services are accessible only if you have access to my LAN, so I feel comfortable doing without SSL in this case. If you already have access to my LAN I'm done for anyway.</p>
<p>There's also the issue of generating certificates for the domains I'm using: since they're non-registrable I can't create a certificate for them. It would also force me to expose parts of my LAN to the web to complete Let's Encrypt challenges.</p>
<p>It's obviously feasible to create SSL certificates for your LAN, but that requires some extra work that I don't want to bother with, so for the time being I decided to ignore this. If in the future I need to expose some services I'll probably use my <code>silvanocerza.com</code> domain to make them available.</p>
<h1>Home Assistant problems</h1>
<p>Recently I also added <a href="https://www.home-assistant.io/">Home Assistant</a> to my services, since it likes to run using the host network I had some issues routing requests to its container with Traefik and it required some extra care.</p>
<p>When you configure a service with <code>network_mode: host</code> you can't set any other network for that service. I tried doing without <code>network_mode: host</code> and keep the Home Assistant container in the Traefik network but it was giving me way too many problems: some integrations wouldn't work at all, devices wouldn't be easily accessible, etc.</p>
<p>Keeping a container in the Traefik network is what makes it accessible with a custom domain though, so having Home Assistant on a separate network was quite a problem. After some experimenting I found a solution that works quite nicely; it just requires an extra container running Nginx.</p>
<pre><code class="language-home-assistant.yml">services:
  home-assistant:
    image: lscr.io/linuxserver/homeassistant:latest
    container_name: home-assistant
    environment:
      - PUID=${PUID?error}
      - PGID=${PGID?error}
      - TZ=${TIMEZONE:-Etc/UTC}
    volumes:
      - ${COMMON_STORAGE?error}/config/home-assistant:/config
    restart: unless-stopped
    privileged: true
    network_mode: "host"

  home-assistant-proxy:
    image: nginx:alpine
    container_name: home-assistant-proxy
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    networks:
      - traefik-net
    extra_hosts:
      - "host.docker.internal:host-gateway"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.home-assistant.rule=Host(`home.${MAIN_ENDPOINT?error}`)"
      - "traefik.http.routers.home-assistant.entrypoints=web"
      - "traefik.http.services.home-assistant.loadbalancer.server.port=80"
      - "traefik.docker.network=traefik-net"

networks:
  traefik-net:
    external: true
</code></pre>
<p>This is what I came up with: Home Assistant runs in its own container using the host network, while Nginx runs in a separate container that is part of Traefik's network. The labels are similar to the ones used by the other services, though they're set on the proxy instead of the actual service. The main difference is the <code>extra_hosts</code> field, which is what lets Nginx reach Home Assistant: it maps <code>host.docker.internal</code> to the Docker host, where Home Assistant is listening. That is in turn used in the <code>nginx.conf</code>, defined as follows.</p>
<pre><code class="language-nginx.conf">server {
    listen 80;

    location / {
        proxy_pass http://host.docker.internal:8123;

        # Essential headers for proper proxying
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (needed for Home Assistant)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";

        # Longer timeouts for long-lived connections
        proxy_read_timeout 90s;
        proxy_connect_timeout 90s;
        proxy_send_timeout 90s;

        # Disable buffering for event streams
        proxy_buffering off;
    }
}
</code></pre>
<p>This is a pretty minimal setup for Nginx with some extras for Home Assistant, but it does the trick and solves the routing issue.</p>
<p>Last but not least, you must add this to your Home Assistant <code>configuration.yaml</code>: it authorises connections coming from the Nginx proxy and the rest of the Docker network. To find the correct subnet I used <code>docker network inspect traefik-net</code>.</p>
<pre><code class="language-configuration.yaml">http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.18.0.0/16
</code></pre>
<p>Among the different solutions I took into consideration, like using the host network for all my services, this is the one that seems the cleanest and satisfies me the most.</p>
<h1>Future improvements</h1>
<p>This is the current setup, though I already have some ideas to improve it.</p>
<p>As of now, to add a new service I need to write a Docker Compose file, but I'd like some kind of interface to automate that. I'm thinking of creating a service with a web UI that lets me do it. Ideally it could also double as a monitoring dashboard for all the services, where I can jump to this or that service with a single click.</p>
<p>There are similar solutions, like <a href="https://dockerdashboard.github.io/">Docker Dashboard</a>, <a href="https://dashy.to/">Dashy</a>, and <a href="https://gethomepage.dev/">Homepage</a>, which I'm currently using, but customising them requires editing lots of config files that I don't really want to bother with, or their use case doesn't completely fill my needs. And in the end I'd like to have some fun trying to create something similar myself.</p>
<p>Though the most urgent improvement I need to focus on is the hardware: I need more storage with some failsafes. The current plan is to get a hard disk bay and use it as Direct Attached Storage for my server. I don't need a NAS; the server is already connected to the network and I don't want to manage a separate server for that.</p>
<p>In the hardware department I should get a UPS too; it's never fun when there's an outage while you're not at home and you can't turn things back on because of it.</p>
<p>In any case this isn't even its final form; I'll keep posting updates when I improve the setup or stumble upon any issues.</p>]]></content>
        <author>
            <name>Silvano Cerza</name>
            <email>silvanocerza@gmail.com</email>
        </author>
        <category term="networking"/>
        <category term="home-server"/>
        <category term="self-hosting"/>
        <category term="docker"/>
        <rights>This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.</rights>
    </entry>
    <entry>
        <title type="html"><![CDATA[A simple way to run only one process per application in C++]]></title>
        <id>a-simple-way-to-run-only-one-process-per-application-in-c++</id>
        <link href="https://silvanocerza.com/post/a-simple-way-to-run-only-one-process-per-application-in-c++"/>
        <updated>2019-04-10T15:51:49.000Z</updated>
        <content type="html"><![CDATA[<p>Sometimes it might be necessary to limit the number of processes running at the same time for a certain application. There can be several reasons for this, for example to prevent data corruption. This is a simple cross-platform way to do it.</p>
<hr>
<p>We're going to use C++17 and the new <a href="https://en.cppreference.com/w/cpp/filesystem">filesystem library</a>; this lets us minimize the use of <code>ifdef</code> since we won't have to use the POSIX or Windows APIs that much.</p>
<p>Our solution will work roughly like this:</p>
<ol>
<li>On launch read the lock file to check if there are other processes running</li>
<li>Write the current process ID into the lock file</li>
<li>On close remove the current process ID from the lock file</li>
</ol>
<p>If our application crashes it obviously won't be able to remove its process ID from the lock file, but the solution we're going to use won't have to worry about that.</p>
<p>We'll get the includes out of the way right now, so if you want to try the code after each step you'll have no issues. This is all you need:</p>
<pre><code class="language-cpp">#include &#x3C;algorithm>
#include &#x3C;filesystem>
#include &#x3C;fstream>
#include &#x3C;iostream>
#include &#x3C;string>
#include &#x3C;vector>

#ifdef _MSC_VER
#include &#x3C;Windows.h>
#include &#x3C;Psapi.h>
#include &#x3C;tchar.h>
#else
#include &#x3C;unistd.h>
#endif
</code></pre>
<h3>Lock File &#x26; Process ID</h3>
<p>First of all we need to reliably retrieve the path of our lock file, and keep using that same path everywhere. This is where we use the new filesystem library: luckily it offers a function that returns the path of the OS temporary directory. We can name our file however we want, but it's important that its name is unique and doesn't change at runtime.</p>
<pre><code class="language-cpp">std::string lockFilePath()
{
    static std::string file = std::filesystem::temp_directory_path().string() + "/MyLockFile";
    return file;
}
</code></pre>
<p>As you might have noticed, we declared our <code>file</code> variable as <code>static</code>: the variable is initialized only the first time that piece of code is executed, and never again during the lifetime of the program. You can find more information on static local variables in the <a href="https://en.cppreference.com/w/cpp/language/storage_duration#Static_local_variables">official documentation</a>.</p>
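<p>To make that behaviour concrete, here's a tiny standalone sketch (<code>nextId</code> is a made-up function just for illustration, not part of our lock file code):</p>
<pre><code class="language-cpp">// The initializer runs only on the first call; later calls reuse
// the same object, so the counter keeps its value between calls.
int nextId()
{
    static int counter = 0; // initialized once, not on every call
    return ++counter;
}
</code></pre>
<p>Calling <code>nextId()</code> repeatedly yields 1, 2, 3 and so on: the initializer on the <code>static</code> line executes a single time. Our <code>lockFilePath()</code> relies on the same mechanism to compute the path only once.</p>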
<p>Now we need to know our current process ID. To do that we must use the OS APIs, and this is one of the few spots where we have to use <code>ifdef</code>: on Windows we'll use the Windows API, on the other platforms POSIX.</p>
<pre><code class="language-cpp">int processId()
{
#ifdef _MSC_VER
    return GetCurrentProcessId();
#else
    return ::getpid();
#endif
}
</code></pre>
<p>There's nothing magical about this function: it returns the process ID and that's it.</p>
<h3>Lock and Unlock</h3>
<p>The two previous functions are enough to handle the lock and unlock steps.
Locking is pretty straightforward: open the file, append the current process ID and close the file.</p>
<pre><code class="language-cpp">// Writes current app PID to lock file
void lockProcess()
{
    std::fstream lockFile(lockFilePath(), std::ios::out | std::ios::app);
    lockFile &#x3C;&#x3C; processId() &#x3C;&#x3C; std::endl;
    lockFile.close();
}
</code></pre>
<p>The unlock is a bit more involved instead: it has to open the lock file, read all the process IDs, remove the current process ID and write the others back.</p>
<pre><code class="language-cpp">// Removes current PID from lock file
void unlockProcess()
{
    std::fstream lockFile;
    lockFile.open(lockFilePath(), std::ios::in);

    std::vector&#x3C;std::string> ids;
    std::string id;
    while (std::getline(lockFile, id)) {
        if (std::stoi(id) != processId()) {
            ids.push_back(id);
        }
    }
    lockFile.close();
    lockFile.open(lockFilePath(), std::ios::out | std::ios::trunc);
    for (const auto&#x26; id : ids) {
        lockFile &#x3C;&#x3C; id &#x3C;&#x3C; std::endl;
    }
    lockFile.close();
}
</code></pre>
<p>These are the building blocks for handling our lock file, but we need to put them to use; it would be kind of pointless to just save the running processes of our application.</p>
<h3>Am I Alone?</h3>
<p>Our goal is to verify whether there are other processes of our application running, and we'll use the lock file to do just that. Our <code>isOnlyInstance</code> function returns whether another process is running by checking if any of the IDs in the lock file matches one of the currently running processes.</p>
<pre><code class="language-cpp">// Returns whether there is another instance of the app with a different PID running
bool isOnlyInstance()
{
    std::fstream lockFile(lockFilePath(), std::ios::in);
    std::vector&#x3C;std::string> ids;
    std::string id;
    while (std::getline(lockFile, id)) {
        ids.push_back(id);
    }
    lockFile.close();

    auto procs = processList();
    for (auto id : ids) {
        if (std::find(procs.cbegin(), procs.cend(), id) != procs.cend()) {
            return false;
        }
    }
    return true;
}
</code></pre>
<p>Now we need to implement our <code>processList</code> function; as its name clearly says, it returns a list of running processes. This is another spot for <code>ifdef</code>s.</p>
<p>On Windows we're using the <a href="https://docs.microsoft.com/en-us/windows/desktop/api/Psapi/nf-psapi-enumprocesses">EnumProcesses</a> function from the PSAPI. It expects a <code>DWORD[]</code> that will be filled with the running process IDs, the size of that array in bytes, and an <code>LPDWORD</code> (that is, a <code>DWORD*</code>) that will receive the number of bytes written into the processes array. A <code>DWORD</code> is a Windows <code>typedef</code> for <code>unsigned long</code>. We then iterate the array of IDs and return it as a vector of strings.</p>
<p>On other platforms we instead take advantage of <code>/proc</code>, which contains a directory named after each running process ID, each holding the information for that process. We don't need anything other than the IDs, so we just iterate the directories in <code>/proc</code>, push their names into our vector and return it.</p>
<p>Here we're again using the <code>filesystem</code> library; notice how convenient it is to iterate a directory's contents with a <a href="https://en.cppreference.com/w/cpp/filesystem/directory_iterator">directory_iterator</a>. <code>p</code> in this case is a <a href="https://en.cppreference.com/w/cpp/filesystem/directory_entry">directory_entry</a> that we convert to a <a href="https://en.cppreference.com/w/cpp/filesystem/path">path</a> to retrieve the directory name, i.e. the process ID.</p>
<pre><code class="language-cpp">std::vector&#x3C;std::string> processList()
{
#ifdef _MSC_VER
    DWORD aProcesses[1024], cbNeeded;

    // Returns zero on failure but we ignore because we're brave enough
    EnumProcesses(aProcesses, sizeof(aProcesses), &#x26;cbNeeded);

    DWORD cProcesses = cbNeeded / sizeof(DWORD);

    std::vector&#x3C;std::string> result;
    for (DWORD i = 0; i &#x3C; cProcesses; i++) {
        result.push_back(std::to_string(aProcesses[i]));
    }
    return result;
#else
    std::vector&#x3C;std::string> processes;
    for (const auto&#x26; p : std::filesystem::directory_iterator("/proc")) {
        processes.push_back(p.path().filename());
    }
    return processes;
#endif
}
</code></pre>
<h3>Usage</h3>
<p>Now that we have all the pieces we can put our mini library to use.</p>
<pre><code class="language-cpp">int main()
{
    std::cout &#x3C;&#x3C; processId() &#x3C;&#x3C; std::endl;
    if (!isOnlyInstance()) {
        std::cout &#x3C;&#x3C; "Another process is running" &#x3C;&#x3C; std::endl;
    }
    lockProcess();

#ifdef _MSC_VER
    Sleep(5000);
#else
    ::sleep(5);
#endif
    unlockProcess();
    return 0;
}
</code></pre>
<p>This is just a minimal example: it prints the process ID and notifies you if another process is running, then locks the current one and waits 5 seconds before unlocking and terminating.</p>
<p>What about crashes though?</p>
<h3>Crash and Burn!</h3>
<p>The code as it is is fine and working; we could stop here and call it a day. The problem is that in case of a crash some IDs might be stuck in our lock file until the next reboot; if the user rarely reboots and the app crashes several times (I hope it doesn't) our lock file might grow a bit. This should be rare, but nonetheless we can handle it by cleaning the lock file ourselves.</p>
<p>Our cleanup function reads all the IDs from the lock file and rewrites it, removing those not currently running. To use it we can just call it at the top of our <code>main</code> and that's it, though it's not strictly necessary.</p>
<pre><code class="language-cpp">// Removes PIDs from lock file that are not running anymore, this mainly cleanups after
// crashes since unlockProcess would not be called
void cleanLockFile()
{
    std::fstream lockFile;
    lockFile.open(lockFilePath(), std::ios::in);
    std::vector&#x3C;std::string> ids;
    std::string id;
    while (std::getline(lockFile, id)) {
        ids.push_back(id);
    }
    lockFile.close();

    lockFile.open(lockFilePath(), std::ios::out | std::ios::trunc);
    auto procs = processList();
    for (auto id : ids) {
        if (std::find(procs.cbegin(), procs.cend(), id) != procs.cend()) {
            lockFile &#x3C;&#x3C; id &#x3C;&#x3C; std::endl;
        }
    }
    lockFile.close();
}
</code></pre>
<h3>Final thoughts</h3>
<p>Now we're pretty much done; there's not much more to do here. You can find an example project on <a href="https://github.com/silvanocerza/locker">GitHub</a>. Feel free to use it for your project, the license is pretty permissive.</p>
<p>Some of you might ask why we went through all the trouble of saving our process IDs to a file instead of searching for a process with the same name. That wouldn't be the most efficient way: neither Windows' PSAPI nor POSIX offers a way to get a list of processes given a name, so we would have to iterate all running processes every time we open our application to know if another one is running. With the lock file mechanism we know right away: if the file is empty we just let it be, and if there are one or more IDs we verify whether at least one of them is in the list of running process IDs.</p>
<p>Thank you for reading this through to the end, I hope you enjoyed it. If you have anything to say feel free to leave a comment.</p>]]></content>
        <author>
            <name>Silvano Cerza</name>
            <email>silvanocerza@gmail.com</email>
        </author>
        <category term="cpp"/>
        <rights>This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.</rights>
    </entry>
    <entry>
        <title type="html"><![CDATA[]]></title>
        <id>duck-type-aws-vpc-guide</id>
        <link href="https://silvanocerza.com/post/duck-type-aws-vpc-guide"/>
        <updated>2025-06-04T07:17:38.000Z</updated>
        <content type="html"><![CDATA[<p>I just read this great guide from Duck Typed about AWS Virtual Private Cloud (VPC); it explains in really simple terms what a VPC is and why it's necessary.</p>
<p>The illustrations complement the post perfectly too. 🖌️</p>
<p>By the way, it's part of a series of posts that explore other AWS networking features; I suggest those too.</p>
<p><a href="https://www.ducktyped.org/p/why-is-it-called-a-cloud-if-its-not">https://www.ducktyped.org/p/why-is-it-called-a-cloud-if-its-not</a></p>]]></content>
        <author>
            <name>Silvano Cerza</name>
            <email>silvanocerza@gmail.com</email>
        </author>
        <category term="aws"/>
        <category term="networking"/>
        <category term="guide"/>
        <rights>This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.</rights>
    </entry>
    <entry>
        <title type="html"><![CDATA[]]></title>
        <id>on-using-this-thing</id>
        <link href="https://silvanocerza.com/post/on-using-this-thing"/>
        <updated>2025-05-16T15:28:49.000Z</updated>
        <content type="html"><![CDATA[<p>I rewrote my blogging system from scratch thinking that I would use it more, and it kinda happened at the start: I wrote some new posts right away.</p>
<p>Right now, though, I find myself not writing as much; to be fair, I'm not even reading that much these days.</p>
<p>I kinda feel I'm missing an important part of the system to truly use it the way I want.</p>
<p>I need to find the time to complete it.</p>]]></content>
        <author>
            <name>Silvano Cerza</name>
            <email>silvanocerza@gmail.com</email>
        </author>
        <category term="blog"/>
        <category term="random"/>
        <rights>This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.</rights>
    </entry>
    <entry>
        <title type="html"><![CDATA[]]></title>
        <id>safari-ios-clipboard</id>
        <link href="https://silvanocerza.com/post/safari-ios-clipboard"/>
        <updated>2025-04-16T14:02:01.000Z</updated>
        <content type="html"><![CDATA[<p>Stupid Safari and iOS forcing me to do this to copy text to clipboard. 😩</p>
<pre><code class="language-typescript">export async function copyToClipboard(text: string) {
  try {
    await navigator.clipboard.writeText(text);
  } catch (err) {
    // Fallback for devices like iOS that don't support Clipboard API
    const textarea = document.createElement("textarea");
    textarea.value = text;
    textarea.setAttribute("readonly", "");
    textarea.style.position = "absolute";
    textarea.style.left = "-9999px";
    document.body.appendChild(textarea);

    textarea.select();
    document.execCommand("copy");
    document.body.removeChild(textarea);
  }
}
</code></pre>]]></content>
        <author>
            <name>Silvano Cerza</name>
            <email>silvanocerza@gmail.com</email>
        </author>
        <category term="clipboard"/>
        <category term="javascript"/>
        <category term="typescript"/>
        <category term="iOS"/>
        <category term="safari"/>
        <rights>This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.</rights>
    </entry>
    <entry>
        <title type="html"><![CDATA[]]></title>
        <id>defending-from-phaas</id>
        <link href="https://silvanocerza.com/post/defending-from-phaas"/>
        <updated>2025-03-10T15:04:18.000Z</updated>
        <content type="html"><![CDATA[<p>Nice writeup by <a href="https://gubello.me">Luigi Gubello</a> regarding Phishing-as-a-Service platforms and how to defend against them.</p>
<p><a href="https://gubello.me/blog/threat-model-phaas-platform-abuses/">https://gubello.me/blog/threat-model-phaas-platform-abuses/</a></p>]]></content>
        <author>
            <name>Silvano Cerza</name>
            <email>silvanocerza@gmail.com</email>
        </author>
        <category term="cybersecurity"/>
        <category term="phaas"/>
        <category term="threat-modeling"/>
        <rights>This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.</rights>
    </entry>
    <entry>
        <title type="html"><![CDATA[]]></title>
        <id>first-thought</id>
        <link href="https://silvanocerza.com/post/first-thought"/>
        <updated>2025-01-23T14:13:00.000Z</updated>
        <content type="html"><![CDATA[<p>Let's see if this works 👀</p>]]></content>
        <author>
            <name>Silvano Cerza</name>
            <email>silvanocerza@gmail.com</email>
        </author>
        <rights>This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.</rights>
    </entry>
</feed>