<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Podman | blog.moe.ph</title><link>https://blog.moe.ph/tags/podman/</link><atom:link href="https://blog.moe.ph/tags/podman/index.xml" rel="self" type="application/rss+xml"/><description>Podman</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Fri, 30 Aug 2019 00:00:00 +0000</lastBuildDate><image><url>https://blog.moe.ph/media/logo_hu_230979b687bad52e.png</url><title>Podman</title><link>https://blog.moe.ph/tags/podman/</link></image><item><title>King Traefik 2.0, Queen Django, and Jack passHostHeader</title><link>https://blog.moe.ph/blog/king-traefik-20-queen-django-and-jack-passhostheader/</link><pubDate>Fri, 30 Aug 2019 00:00:00 +0000</pubDate><guid>https://blog.moe.ph/blog/king-traefik-20-queen-django-and-jack-passhostheader/</guid><description>&lt;p&gt;Deploying a Django application means, most of the time, deploying it behind an Nginx webserver. We use this setup as well. Deploying it this way is easy, since you only need to set &lt;i&gt;proxy_pass&lt;/i&gt; and &lt;i&gt;proxy_set_header&lt;/i&gt;.&lt;/p&gt;
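&lt;p&gt;A minimal server block looks something like this (the upstream address is a placeholder for wherever your Django app server listens):&lt;/p&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-nginx" data-lang="nginx"&gt;server {
    listen 80;
    server_name example.com;

    location / {
        # forward requests to the Django app server (gunicorn/uwsgi)
        proxy_pass http://127.0.0.1:8000;
        # pass the original Host header to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;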
&lt;p&gt;The only problem with this setup comes when you have one Nginx webserver in front of multiple Django backends. Whenever you deploy a new Django app, you need to add new settings for it and restart the server. You might be able to reload Nginx by sending SIGHUP to its master process, but we are running in containers, so that&amp;rsquo;s a hassle. You also need to reload it every time you edit something.&lt;/p&gt;
&lt;p&gt;That’s when we looked into alternative reverse proxies. We wanted something more cloud-native, and that’s when we found Traefik. Traefik is a cloud-native edge router: it integrates with multiple cluster technologies and supports live reload of configuration files for dynamic settings. We use Podman to manage our containers, so we rely on the dynamic file reload.&lt;/p&gt;
&lt;p&gt;We started playing with Traefik, going straight to version 2. Tinkering with Traefik 2 is not that easy if you’re not familiar with the earlier version.&lt;/p&gt;
&lt;p&gt;The setting that really tripped up our testing was passing the proxied Host header to the backend. The first settings we looked through were Traefik’s middleware headers settings. That is the most logical place to look first if you are playing with headers, right? So we played with these settings.&lt;/p&gt;
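&lt;p&gt;For reference, what we played with looked roughly like this in the dynamic file configuration (the middleware name and hostname here are placeholders, not our actual values):&lt;/p&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;http:
  middlewares:
    forceHost:
      headers:
        customRequestHeaders:
          Host: example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;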
&lt;p&gt;It seemed okay, but it didn’t actually work: the Host header was still not passed to the backend. I held a bit of a grudge against Traefik at this point, then realized that I should check the settings of the earlier version. Traefik 1.x settings are scattered across GitHub, and that’s where I found this boolean setting.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;passHostHeader = true
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;I figured this setting should also be present in Traefik 2.0, and after checking the dynamic configuration reference, I found it.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-gdscript3" data-lang="gdscript3"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;Service01&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;loadBalancer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;passHostHeader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;I set this to true, and the backend finally responded.&lt;/p&gt;
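&lt;p&gt;For context, here is roughly how that option sits in a complete dynamic configuration, next to a router and the backend address (the rule, service name, and URL are placeholders, not our production values):&lt;/p&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;http:
  routers:
    django:
      rule: Host(`example.com`)
      service: Service01
  services:
    Service01:
      loadBalancer:
        # forward the client-supplied Host header to the backend
        passHostHeader: true
        servers:
          - url: http://127.0.0.1:8000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;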
&lt;p&gt;I guess everyone deploying Traefik 2.0 should know about this setting.&lt;/p&gt;</description></item><item><title>Podman, Containerization, and a Poor Guy</title><link>https://blog.moe.ph/blog/podman-containerization-and-a-poor-guy/</link><pubDate>Tue, 20 Aug 2019 00:00:00 +0000</pubDate><guid>https://blog.moe.ph/blog/podman-containerization-and-a-poor-guy/</guid><description>&lt;p&gt;Deploying Kubernetes, Docker, and cloud-native apps usually calls for large servers, but what if you are a poor guy like me and just want to use containerization technology on small servers? One thing that can help you is Podman.&lt;/p&gt;
&lt;p&gt;Podman. Ahh, what a great daemonless container engine. Daemonless? Yes. It means it has a minimal footprint on the server, leaving more power for the containers themselves.&lt;/p&gt;
&lt;p&gt;We currently use Podman to manage our apps on a 4GB/2vCPU Fedora 30 Linode. As of now, we are comfortably running these apps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reverse-proxy Nginx&lt;/li&gt;
&lt;li&gt;Multiple Django sites&lt;/li&gt;
&lt;li&gt;Tor relay node&lt;/li&gt;
&lt;li&gt;IPFS node&lt;/li&gt;
&lt;li&gt;Syncthing node&lt;/li&gt;
&lt;li&gt;Development containers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You might wonder: doesn’t running all those containers still consume a lot of resources? It does, but another great thing about containers is resource limiting. I simply limit every production container to 2GB of memory and 1 vCPU. I did this by running my containers as services, then setting up an environment file with this configuration.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[root@prod ~]# grep -i EnvironmentFile /etc/systemd/system/cnginx.service
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;EnvironmentFile=-/etc/sysconfig/podman
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[root@prod ~]# cat /etc/sysconfig/podman
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PODMAN_OPTIONS=&amp;#34;--rm -a stdout -a stderr -m 2g --cpus 1&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;This ensures that no single container hogs everything and renders the others unresponsive. I also run them as services to automatically restart a container if something happens.&lt;/p&gt;
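&lt;p&gt;Put together, the cnginx unit looks roughly like this (the ExecStart command, image, and ports are a sketch of my setup, not the exact file):&lt;/p&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-ini" data-lang="ini"&gt;[Unit]
Description=Nginx reverse proxy container
After=network-online.target

[Service]
# pulls in PODMAN_OPTIONS (resource limits, --rm, log attachment)
EnvironmentFile=-/etc/sysconfig/podman
ExecStartPre=-/usr/bin/podman rm -f cnginx
ExecStart=/usr/bin/podman run $PODMAN_OPTIONS --name cnginx -p 80:80 -p 443:443 nginx
Restart=always
RestartSec=15s

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;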
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[root@prod ~]# grep -i restart /etc/systemd/system/cnginx.service
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Restart=always
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;RestartSec=15s
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;There are instances where Podman kills a container due to OOM, but the automated restart restores the service. If OOM kills happen repeatedly, you will need to adjust the resource limits accordingly.&lt;/p&gt;&lt;p&gt;After using Podman for several months now, I just love it. I wish they would be able to push their binaries to the Debian repositories.&lt;/p&gt;</description></item></channel></rss>