[{"content":"Introduction As you might, or might not be aware, in my day to day job, I am a manager / coach for a group of highly technical experts. I am happy that I am able to be their manager and being able to coach them on a daily basis. This means I strive to improve every day myself so that I can give my best version to them. And how can I ask of my colleagues to grow if I am not growing as well?\nOne can grow by doing courses, learning new things, or for example discover yourself more and more.\nOne of those methods for me that worked out pretty well is the NLP Practitioner course. NLP for me is a highly skilled course (not a training) that enables to you start becoming an excellent communicator and observer. I find this important, things I say have weight, and things that I observe and hear, have a meaning as well. Being able to interpret them at the right levels, means I can relate easier to what is going on and align where possible.\nWhat is NLP? NLP stands for Neuro-Linguistic-Programming:\nThe Neuro represents the neurology, the brain, how do we think, where do we think, what makes me me? How is information interpreted?\nLinguistic is the language, and then specifically the art of the language. How are my words interpreted? How do I avoid unclear sentences ? What if I get a push back, did I push too hard myself?\nProgramming stands obviously for \u0026lsquo;programming\u0026rsquo; yourself. What can I do to alter the way I communicate? If I know that the other has a preference to visualize things, would it then help if I talk about feelings? No I need to level up to the other so that we speak the same language.\nSounds simple eh? Well, before you can become successful in this, I needed a long running course.\nWhy find ways to discover yourself? As previously certified Solution Focussed Coach, I know that as coach you should not have any bias for any of your clients (or in my case colleagues), at all, ever. Period. Why? Well, if you have an opinion about the other already, how can you remain curious to the reasons of the other and find the solution for them within themselves? Having said that, everyone has certain bias\u0026rsquo;es and it is time to set that aside, the world will become a better place if we are all curious to the other.\nThere the challenge comes in, because if you do not know what your personal strengths and weaknesses are in real life, how can do know if you have a bias against something or someone?\nWhat do you learn? Within the NLP Practitioner training of BDPTraining, you will learn yourself quite well. You will see things you always took for granted, and know you can throw it through the waste bin from now on. You will learn how your brains work, like where the hippocampus lies and that it is the cinema of your brains (amongst other uses the hippocampus has), how you can train your brain to become more resourceful. You will do practical hands on training with other curious people that become your friends in relatively short time. You see yourself struggling and knowing that others struggle there too.\nOne give away that I\u0026rsquo;ll give you from this post: problems only exist in your head, no one else has your problems. So stop doing them in your head and get rid of them. If someone else has a problem, it is up to you to accept it (and have the problem as well), or just keep it with the other.. I know what I prefer to do!\nSo what is next? The course was about 7 blocks, divided over 4 months. 
After block 1 I mentioned at home that I should do the masters as well. I will continue practising on a daily basis whenever a situation occurs that I can use my skills in. Not in an obsessive damaging way, but in a way to connect to the other person better.\nConclusion This is one of the longest courses that I took, one of the most intensive courses that I ever had. But also the course that really challenged me and made me curious about other topics regarding the brain and human activity around it. I am pondering several educations that strengthen this knowledge even further. I am not going to promise that I will be doing them, but I am investigating.\nOh, and I found this one of the best courses I ever had, that really enriched me as person, improves me as husband and father, enables me as manager and coach, makes me a better friend, and hopefully others agree with that, and else.. not my problem\u0026hellip;\n","permalink":"https://www.evilcoder.org/posts/2025-12-13-certified-nlp-practitioner/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eAs you might, or might not be aware, in my day to day job, I am a manager / coach for\na group of highly technical experts. I am happy that I am able to be their manager\nand being able to coach them on a daily basis. This means I strive to improve every day myself so that I can give my best version to them. And how can I ask of my colleagues to grow if I am not growing as well?\u003c/p\u003e","title":"Certified as NLP Practitioner"},{"content":" Dear Puc,\nI am not a big fan of writing about personal items, for you I make an exception. You changed my world. I deeply miss you already.\nSince 2014 you had been our companion. Our first dog, our sweetest dog. Named after \u0026ldquo;Pug\u0026rdquo; the magician from Raymond E. Feist, you were our pride. You came to our house almost 11 years ago to the date (23rd of november 2014). We were so proud to bring you back home. We redecorated the house for you with rugs all over the bottom floor. When we came home, or downstairs you were there waiting for us, impatiently and happy to see us, always without any opinions, just happy and in for a cuddle!\nImmediately from the start you took our hearts and were part of the family. You played with our son Bram, took his socks from him, and had great joy doing so. Obviously Bram found something of that and came after you to get back his socks. You just ran and had fun with him.\nWith Denise you made long walks when she was at home before and after her pregnancy, you were with her and sat next to her when she needed a rest. You both made many miles! You were a real help to her and always there for her. She trained you, you were so eager to learn always, and to work for your two biggest gifts: a cookie, and the most important one, hugs!\nWith Luca you had a very sweet moment, when he came back from a school camp you ran to him (we could not hold you), and jumped on him placing your legs around his neck and gave him a big hug. You missed him for sure! It was a nice moment to see and we will always remember it (we have the pictures).\nWhen our youngest child, Julia, was born, you welcomed her with a lick on her feet, and sat next to her, protecting her as it was your own child. She could do anything with you, from horse riding to just lay with you on the couch, you found it all OK. 
She still mentions that she was your biggest friend.\nYou were naughty, energetic, very sweet, attentative, and stole the hearts of everyone you have ever met. You had an ever lasting grin on your face showing that you were relaxed, and happy. You had that for almost your entire life! You did not let go of people easily, you had to hug them and they needed to pet you. With those things, you really changed a lot of lifes!\nSince you were part of our family, you were never away from us more then a day. Someone was always close to you, you went with us on holidays, especially the ones on Texel stood out, where you had seen the ocean for the first time, and we could barely hold you because this is what you wanted! And we gave it to you a few times. You also liked to jump in the ice cold waters in Belgium, sit with us in the bath in Groningen, walk through the forests in Germany (you actually took me up the hills you know, with your strength and enthusiasm to see what was behind the top).\nYou were never afraid of the fireworks, we helped you in your younger years to not be scared, and till the last fireworks sparked, you were never ever afraid of it.\nYou learned that when we said: \u0026ldquo;Run\u0026rdquo; you took your leash in your mouth and started to run as hard as you could. You outran everyone easily and then you just ran back and forth. You had so much energy always and always wanted to come along outside. You loved traveling in the car, you were already in the car sometimes before the blink of an eye. As if you wanted to say: I am there, I am waiting for you, lets go!\nSwiming was a big passion of you, you once took a swim and almost crossed to the other end of the \u0026ldquo;Waal\u0026rdquo;, I got instructed to drop my clothes and swim after you, because it didn\u0026rsquo;t look like you wanted to return. After some calling and waiting patiently, you returned to the end we were on. As if you wanted to say: I just inspected the other end, it was fun and now I am back.\nYou also were always with us in the house, when we were in the kitchen, you were there as well. Not in the most convient place ofcourse, but visible, feelable and well, you could not be missed. If friends come over to play a game, you were under the table, lying closely to them or even on their feet. If the weather was hot outside, you lay on the loungechairs, like a princess, unable to miss you. Or when the kids had the pool setup, you wanted to play with the water as well. A water playing game? you were in it!\nI become quite ill in 2016, I could barely walk and my world was spinning around constantly. I could not do much, but the one thing that I had to do frequently, is walk with you. I heard from the physians, that normally elderly who have this, recover in +/- 2 years. You helped me recover in a year. When I walked all around the roads and almost walked into the creeks, you were next to me, preventing me from tripping in, and going around all those people that thought I was drunk. I was not, but my illness made it look like it. You were the supporting one always.\nIn that period, our cat Max came to your life, you could not get any puppies, but still for some reason you had to change someones life again. You began to give milk, and Max drank this from you. The both of you walked happily outside together. Max always hid from you and then jumped from behind a tree or something and you always got scared a bit. And then happily continued your journey. 
it is a thing we will never forget!\nDuring the period after, you slowly developed artrosis, we tried a lot of medications, but somehow, it did not actually help that much. What helped is with the help of Natasha, an animal chiropractor, you could do many more things, even without medication. What a life safer!\n2 years ago, we adopted a second dog, Saar. You strongly and surely helped to raise her. You played with her, eventhough it was not easy for you sometimes, and helped her when she did things that were not allowed. Then a year ago, Lotje came into our life, and you helped her grow up as well.\nBut we also saw you slowly taking steps back. You became more and more held back, you started sleeping more often and a few weeks ago you started drinking more and more, and also peeing more. You could not handle the two young dogs and showed your boundaries much quicker (you did not actually have them at all!).\nLooking back, the period that you became worse, grew quicker then we imagined. We decided to take you to the vets to see what appeared to be not entirely you anymore. The grin on your face was gone, you seemed tired, and sad. The verdict was quite surprising, you looked healthy from the outside, except that you could not stand easily, which was likely due to the artrosis. But we decided to do a blood test as well.\nUnexpectedly, the result was that you were severe diabetic. We learned that on friday, and we needed to decide what to do next. With a lot of grief we decided that we should let you go. Not because we wanted to, but we saw that we could never help you get back to who you were. Doing a very intensive support, would have made your life more difficult as well.\nAnd now I write this \u0026rsquo;letter\u0026rsquo; to you as a rememberance, you, who I and we all in the family loved dearly.\nDear Puc, you had a great heart, and with heavy heart, a lot of love and tears, we had let you go. We were all with you, Julia, Bram, Luca, Denise, Saar, Lotje and myself. We supported you on the most important trip you were ever going to make, without us.\nYou will be in our hearts forever, sleep well my dearly beloved friend that was always happy when I or we come home whenever time it was at the day. We miss you! We love you!\nRest in peace, we will meet again! Remko \u0026amp; fam\n","permalink":"https://www.evilcoder.org/posts/2025-11-25-in-memoriam-puc/","summary":"\u003cimg src='/images/puc.png' /\u003e\n\u003cp\u003eDear Puc,\u003c/p\u003e\n\u003cp\u003eI am not a big fan of writing about personal items, for you I make an\nexception. You changed my world. I deeply miss you already.\u003c/p\u003e\n\u003cp\u003eSince 2014 you had been our companion. Our first dog, our sweetest dog.\nNamed after \u0026ldquo;Pug\u0026rdquo; the magician from Raymond E. Feist, you were our pride.\nYou came to our house almost 11 years ago to the date (23rd of november\n2014). We were so proud to bring you back home. We redecorated the house\nfor you with rugs all over the bottom floor. When we came home, or downstairs\nyou were there waiting for us, impatiently and happy to see us, always\nwithout any opinions, just happy and in for a cuddle!\u003c/p\u003e","title":"In Memoriam Puc"},{"content":"Introduction When I worked for Snow, I once a year visited the BSD Conferences and went to BSDCan one time. There are equal kind of conferences for VMware related groups, like the VMware User Group (VMUG).\nThis year the VMUG-NL Conference had been hosted in Den Bosch, in the 1391 venu.\nHow was it? 
It was good to see fellow colleagues heading over there, you talk about different things than at work and in general you have fun together.\nThe venue is more than large enough, with enough stands and facilities including food and beverages. The food was partially prepared locally, as we were told by one of the hostesses.\nAlso, somewhere I saw an old colleague from my time at a government municipality walking around.\nI found several good presentations: the opening was interesting, and so was the talk from Pure Storage:\nPure Storage Business Continuity and Disaster Recovery of VMware Private Cloud\nSeveral sessions from VMware regarding the future of VCF and vSAN were not that great. The midday keynote was a bit over the top. I think the content itself was great but the showmanship around it was just not needed. We even left early because of that. That included the following:\nVMware\u0026rsquo;s vision for storage and data protection in vSAN and VCF 9\nvSAN in VCF Operations: monitoring and performance troubleshooting\nDisaster Recovery in the Broadcom world: Setup, Configure \u0026amp; Manage\nThe best presentation of the day was\nBig Game Hunting: Ransomware’s High-Stakes War on Enterprises\nOverall it was a nice day to see and meet like-minded people. The organisation did a great job in getting people together; sometimes things do not entirely come out as expected, but that could be more a pointer for the presenters.\nWho, admittedly, stood up there while I did not, so they are one point ahead of me :-)\n","permalink":"https://www.evilcoder.org/posts/2025-03-12-vmug/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eWhen I worked for Snow, I visited the BSD conferences once a year and went\nto BSDCan one time. There are similar kinds of conferences for VMware related\ngroups, like the VMware User Group (VMUG).\u003c/p\u003e\n\u003cp\u003eThis year the VMUG-NL Conference was hosted in Den Bosch, in the 1391\nvenue.\u003c/p\u003e\n\u003ch2 id=\"how-was-it\"\u003eHow was it?\u003c/h2\u003e\n\u003cp\u003eIt was good to see fellow colleagues heading over there, you talk about different\nthings than at work and in general you have fun together.\u003c/p\u003e","title":"VMUG Netherlands"},{"content":"Introduction At work, we have a central configuration build that we pick parts and pieces from for our \u0026lsquo;deployments\u0026rsquo;. As you can read in my resume, I am working as a Virtualisation Engineer. There I am building most of the deployment code. Our base structure uses our vCenters as the primary key.\nWhat does that mean? Well, if you take a vCenter as the main key, then everything that builds a vCenter (ESXi hosts, clusters, distributed switches, storage, etc.) sits under the vCenter configuration. So you target a vCenter and not specifically the hosts underneath a vCenter.\nYeah so? That means that you need to do inflexible loops to go over the storage items or ESXi hosts. Imagine that both storage and ESXi hosts are addressed because they somehow relate to each other; then you might need to do two loops to get the implementation done. That is fine with a couple of hosts and small storage, but if you need to do that en masse, it is inflexible. You cannot do easy loops as you can in Python or Powershell, for example.\nSo, bottom line, this takes a bunch of time to process, and when you deploy something, you want it as quick as possible.\nTo the rescue I do not refer to Ansible\u0026rsquo;s Rescue mode.
Block,rescue,always, you\u0026rsquo;ll know this if you done the course ;-). But I got a tip recently from one of my colleague\u0026rsquo;s from the Linux team, that there is this concept of \u0026lsquo;virtual host groups\u0026rsquo; in Ansible. (ansible.builtin.add_host). So you can do some loops from the configuration and build virtual host objects and add them to a virtual group.\nIf you then rewrite parts of your \u0026lsquo;sequential\u0026rsquo; playbook into smaller subsections and put them in an own play (basically still the same but then started from a different play). You can target the virtually created group in one go, which without limit just pushes it to as many (virtual) host objects as possible.\nSo instead of sequentially looping over each ESXi host and then storage. You can get all ESXi hosts, create the configuration for it that you need put it in a virtual ESXi group, and run your play against it. If one of the config items is a large configuration block for storage, you can then loop over that, or if you restucture it smartly, you might be able to use different \u0026lsquo;primary keys\u0026rsquo; to smash the data against. This saves at least one slow iteration, and in my case it speeds up a large part of the building blocks by a factor of 10 (reducing the implementation time hugely).\nHow does that look like? Again I cannot show how we do that at work, but given a certain configuration structure like:\nconfiguration: cluster: - name: clusterA hosts: - name: HostA ip: 127.0.0.1 description: This is host A under cluster A - name: HostB ip: 127.0.0.2 description: This is host B under cluster A - name: ClusterB hosts: - name: HostC ip: 127.0.0.3 description: This is host C under cluster B - name: HostD ip: 127.0.0.4 description: This is host D under cluster B storage: hosts: - name: StorageA fqdn: storage-a.your.domain datastores: - name: datastoreA size: 1GB amount: 10 - name: datastoreB size: 10GB amount: 10 You could have a playbook that has:\n--- - name: Build storage hosts: localhost gather_facts: false tasks: - name: Get all data ansible.builtin.debug: msg: - \u0026#34;storagename: {{ storage.0.name }}\u0026#34; - \u0026#34;datastorename: {{ storage.1.name }} with size: {{ storage.1.size }} and how many times {{ storage.1.amount }}\u0026#34; loop_control: loop_var: storage loop: \u0026#34;{{ query(\u0026#39;subelements\u0026#39;, configuration.storage.hosts, \u0026#39;datastores\u0026#39;) }}\u0026#34; This will result in:\n[...] 
ok: [localhost] =\u0026gt; (item=[{\u0026#39;name\u0026#39;: \u0026#39;StorageA\u0026#39;, \u0026#39;fqdn\u0026#39;: \u0026#39;storage-a.your.domain\u0026#39;}, {\u0026#39;name\u0026#39;: \u0026#39;datastoreA\u0026#39;, \u0026#39;size\u0026#39;: \u0026#39;1GB\u0026#39;, \u0026#39;amount\u0026#39;: 10}]) =\u0026gt; { \u0026#34;msg\u0026#34;: [ \u0026#34;storagename: StorageA\u0026#34;, \u0026#34;datastorename: datastoreA with size: 1GB and how many times 10\u0026#34; ] } ok: [localhost] =\u0026gt; (item=[{\u0026#39;name\u0026#39;: \u0026#39;StorageA\u0026#39;, \u0026#39;fqdn\u0026#39;: \u0026#39;storage-a.your.domain\u0026#39;}, {\u0026#39;name\u0026#39;: \u0026#39;datastoreB\u0026#39;, \u0026#39;size\u0026#39;: \u0026#39;10GB\u0026#39;, \u0026#39;amount\u0026#39;: 10}]) =\u0026gt; { \u0026#34;msg\u0026#34;: [ \u0026#34;storagename: StorageA\u0026#34;, \u0026#34;datastorename: datastoreB with size: 10GB and how many times 10\u0026#34; ] } But, if you need to do something with the hosts as well, you cannot navigate to that, because that is on a different level/path in the configuration.\nSo you might need to do another loop and include a task file to target these hosts with the data from the loop above.\nOne can also get a list of all hosts, so if you add the following to the deploy yaml:\n- name: Get all nodes ansible.builtin.debug: msg: - \u0026#34;clustername: {{ cluster.0.name }}\u0026#34; - \u0026#34;hostname: {{ cluster.1.name }}\u0026#34; loop_control: loop_var: cluster loop: \u0026#34;{{ query(\u0026#39;subelements\u0026#39;, configuration.cluster, \u0026#39;hosts\u0026#39;) }}\u0026#34; Then you will also have a list of clusters and nodes underneath that cluster.\nIf you then take the data and create a specific hostconfiguration (below is a dummy, you should be able to see the vision behind it, or contact me if not ;-)):\n--- - name: Build storage hosts: localhost gather_facts: false tasks: - name: Get all data ansible.builtin.debug: msg: - \u0026#34;storagename: {{ storage.0.name }}\u0026#34; - \u0026#34;datastorename: {{ storage.1.name }} with size: {{ storage.1.size }} and how many times {{ storage.1.amount }}\u0026#34; loop_control: loop_var: storage loop: \u0026#34;{{ query(\u0026#39;subelements\u0026#39;, configuration.storage.hosts, \u0026#39;datastores\u0026#39;) }}\u0026#34; - name: Get all nodes ansible.builtin.debug: msg: - \u0026#34;clustername: {{ cluster.0.name }}\u0026#34; - \u0026#34;hostname: {{ cluster.1.name }}\u0026#34; - \u0026#34;storagedata: {{ configuration.storage.hosts }}\u0026#34; loop_control: loop_var: cluster loop: \u0026#34;{{ query(\u0026#39;subelements\u0026#39;, configuration.cluster, \u0026#39;hosts\u0026#39;) }}\u0026#34; - name: Add virtual hostgroup ansible.builtin.add_host: groups: \u0026#39;virtual_hostgroup\u0026#39; name: \u0026#34;{{ cluster.1.name }}\u0026#34; cluster_name: \u0026#34;{{ cluster.0.name }}\u0026#34; storagedata: \u0026#34;{{ configuration.storage.hosts }}\u0026#34; loop_control: loop_var: cluster loop: \u0026#34;{{ query(\u0026#39;subelements\u0026#39;, configuration.cluster, \u0026#39;hosts\u0026#39;) }}\u0026#34; ## New play only targeting the host objects - name: Build storage for host hosts: virtual_hostgroup gather_facts: false tasks: - name: Print host ansible.builtin.debug: msg: - \u0026#34;{{ inventory_hostname }}\u0026#34; - \u0026#34;storages: {{ storagedata }}\u0026#34; This will give the output of:\nTASK [Print host] 
*********************************************************************************************************************************************************************************************************************** task path: demo.yaml:42 ok: [HostA] =\u0026gt; { \u0026#34;msg\u0026#34;: [ \u0026#34;HostA\u0026#34;, \u0026#34;storages: [{\u0026#39;name\u0026#39;: \u0026#39;StorageA\u0026#39;, \u0026#39;fqdn\u0026#39;: \u0026#39;storage-a.your.domain\u0026#39;, \u0026#39;datastores\u0026#39;: [{\u0026#39;name\u0026#39;: \u0026#39;datastoreA\u0026#39;, \u0026#39;size\u0026#39;: \u0026#39;1GB\u0026#39;, \u0026#39;amount\u0026#39;: 10}, {\u0026#39;name\u0026#39;: \u0026#39;datastoreB\u0026#39;, \u0026#39;size\u0026#39;: \u0026#39;10GB\u0026#39;, \u0026#39;amount\u0026#39;: 10}]}]\u0026#34; ] } ok: [HostB] =\u0026gt; { \u0026#34;msg\u0026#34;: [ \u0026#34;HostB\u0026#34;, \u0026#34;storages: [{\u0026#39;name\u0026#39;: \u0026#39;StorageA\u0026#39;, \u0026#39;fqdn\u0026#39;: \u0026#39;storage-a.your.domain\u0026#39;, \u0026#39;datastores\u0026#39;: [{\u0026#39;name\u0026#39;: \u0026#39;datastoreA\u0026#39;, \u0026#39;size\u0026#39;: \u0026#39;1GB\u0026#39;, \u0026#39;amount\u0026#39;: 10}, {\u0026#39;name\u0026#39;: \u0026#39;datastoreB\u0026#39;, \u0026#39;size\u0026#39;: \u0026#39;10GB\u0026#39;, \u0026#39;amount\u0026#39;: 10}]}]\u0026#34; ] } ok: [HostC] =\u0026gt; { \u0026#34;msg\u0026#34;: [ \u0026#34;HostC\u0026#34;, \u0026#34;storages: [{\u0026#39;name\u0026#39;: \u0026#39;StorageA\u0026#39;, \u0026#39;fqdn\u0026#39;: \u0026#39;storage-a.your.domain\u0026#39;, \u0026#39;datastores\u0026#39;: [{\u0026#39;name\u0026#39;: \u0026#39;datastoreA\u0026#39;, \u0026#39;size\u0026#39;: \u0026#39;1GB\u0026#39;, \u0026#39;amount\u0026#39;: 10}, {\u0026#39;name\u0026#39;: \u0026#39;datastoreB\u0026#39;, \u0026#39;size\u0026#39;: \u0026#39;10GB\u0026#39;, \u0026#39;amount\u0026#39;: 10}]}]\u0026#34; ] } ok: [HostD] =\u0026gt; { \u0026#34;msg\u0026#34;: [ \u0026#34;HostD\u0026#34;, \u0026#34;storages: [{\u0026#39;name\u0026#39;: \u0026#39;StorageA\u0026#39;, \u0026#39;fqdn\u0026#39;: \u0026#39;storage-a.your.domain\u0026#39;, \u0026#39;datastores\u0026#39;: [{\u0026#39;name\u0026#39;: \u0026#39;datastoreA\u0026#39;, \u0026#39;size\u0026#39;: \u0026#39;1GB\u0026#39;, \u0026#39;amount\u0026#39;: 10}, {\u0026#39;name\u0026#39;: \u0026#39;datastoreB\u0026#39;, \u0026#39;size\u0026#39;: \u0026#39;10GB\u0026#39;, \u0026#39;amount\u0026#39;: 10}]}]\u0026#34; ] } Where you then can do a loop over the storagedata, or more complex data, but do it in parallel for each host (instead of sequentially per host). You can also only put in the information that is needed for this run and send them along as host_vars.\nOfcourse our setup is much much much more complex and has a lot more data, so it is not comparable at all. But, at least this gives an idea how you can target something like that. You could also in the above examples use the storage as primary key, and then do something with that when you loop over the hosts (the other way around then this example). It is all depending on what you need and how you need it. 
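To make that last idea a bit more tangible, here is a minimal sketch (built on the dummy configuration above, not on our actual code; the group name virtual_storagegroup is made up) that flips the example around and uses the storage hosts as the primary key:

```yaml
---
# Minimal sketch, not production code: the same trick as above, but with
# the storage hosts as the primary key. Each storage host becomes a
# virtual inventory host that also carries the cluster data along.
- name: Build virtual hostgroup keyed on storage
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Add one virtual host per storage host
      ansible.builtin.add_host:
        groups: 'virtual_storagegroup'
        name: "{{ item.name }}"
        fqdn: "{{ item.fqdn }}"
        datastores: "{{ item.datastores }}"
        clusterdata: "{{ configuration.cluster }}"
      loop: "{{ configuration.storage.hosts }}"

## New play that targets the storage objects instead of the ESXi hosts
- name: Configure datastores per storage host
  hosts: virtual_storagegroup
  gather_facts: false
  tasks:
    - name: Show what this storage host would configure
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} ({{ fqdn }}): {{ item.name }}, {{ item.size }}, {{ item.amount }} times"
      loop: "{{ datastores }}"
```

The second play then runs once per storage host, and Ansible works on those hosts in parallel instead of looping over them one by one.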
There might be thousands of hosts, thousands of datastores, thousands of whatever, and combining this wisely makes you capable of doing things more in parallel. So instead of:\nloop over all hosts (x1000)\nloop over all datastores (x1000)\nloop over all whateverdata (x1000)\nyou do:\n1000 x host, in parallel:\nloop over all datastores (x1000)\nloop over all whateverdata (x1000)\nYou can do the first run in parallel and take out a sequential wait of 1000 iterations; if every host iteration takes a second, that saves you 1000 seconds, or just shy of 17 minutes.\nI did not experiment with this, but you might be able to create secondary virtual hostgroups and tackle the datastores in parallel as well (play with forks or serial to prevent overloading your system ;-), reducing the time even more.\nSummary For us this is a huge performance gain: we target the things that take a lot of time, combine the data into a virtual host object in a virtual hostgroup, and target that in a separate play, doing the activities on those hosts in parallel.\nAs always, if you have questions, please contact me.\n","permalink":"https://www.evilcoder.org/posts/2024-11-22-parallel-ansible-builds/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eAt work, we have a central configuration build that we pick parts and\npieces from for our \u0026lsquo;deployments\u0026rsquo;. As you can read in my resume, I am\nworking as a Virtualisation Engineer. There I am building most of the\ndeployment code. Our base structure uses our vCenters as the primary\nkey.\u003c/p\u003e\n\u003ch2 id=\"what-does-that-mean\"\u003eWhat does that mean?\u003c/h2\u003e\n\u003cp\u003eWell, if you take a vCenter as the main key, then everything that builds\na vCenter (ESXi hosts, clusters, distributed switches, storage, etc.)\nsits under the vCenter configuration. So you target a vCenter and\nnot specifically the hosts underneath a vCenter.\u003c/p\u003e","title":"Parallel runs of ansible 'stuff'."},{"content":"Introduction In my previous article, I wrote about Mermaid and that I wanted to experiment with generating documentation from the actual sources.\nI was recently able to focus properly on this and I think I had a real breakthrough, at least for myself.\nThe state now Last time I wrote that I was still investigating how and what, especially because of the compression that takes place when you do a graph Top-Down (TD). There is a very simple solution to that: make it Left to Right!\nNow all I needed to add to the mix was parsing the bits and pieces of our configuration data. As mentioned, I use Ansible for that: I generate the configuration I need and use a Jinja template to parse the data and print whatever I need wherever I need it. I cannot share details of course because that is work related, but I am sure you can use your imagination on your own defined configuration and which items you need to graph something.\nOh, and I tossed the subgraphs entirely. I did have a look at the architecture diagrams that one can make, but that still seems a bit too difficult to get a proper model out of.\nModel I mentioned the word model in the previous section on purpose: our configuration is repeatable, as any modern configuration should be when you use it a gazillion times.
That also means that you can wrap it in a model.\nPydantic This is where Pydantic comes in. A coworker of mine did a demo of this recently and I watched it afterwards (it was given on my day off, but we love to share internally so I could still view it later on). He is from a different group, but they too have a configuration that is repeatable and perfectly fits a model.\nWhat is Pydantic I gave a talk about Pydantic recently myself and I used the phrase: it is a strict and quick validator of a given model over a defined configuration.\nPerhaps that does not do the tool full justice, since FastAPI for example uses it to validate input and output on the fly, but for me this description works perfectly.\nHow does it work Basically you strictly denote your configuration, and if you for example use yaml, this has a certain layout. I will try to give an example a bit lower in the article. This layout and its configuration items (yaml entries, like lists, dicts, a combination of them, etc.), if used well, always match certain criteria.\nLike with ansible, something can be \u0026lsquo;state: present\u0026rsquo; or \u0026lsquo;state: absent\u0026rsquo;. If you wrap that in a Pydantic scheme, it becomes: \u0026lsquo;state: Literal[\u0026ldquo;present\u0026rdquo;,\u0026ldquo;absent\u0026rdquo;]\u0026rsquo;. That means that if the validation traverses your configuration and finds a state keyword, it must match either present or absent. All other values are wrong and your validator will fail. You can also have the flag \u0026lsquo;enabled: true\u0026rsquo;, or false. That reads in Pydantic like \u0026lsquo;enabled: bool\u0026rsquo;, since it is either true or false. If it is an integer (all digits), then you can state \u0026lsquo;version: int\u0026rsquo; for example. You can use regular expressions as well, so if you know what a keyword\u0026rsquo;s value should look like, you can push it through a regular expression and validate that what you think must be defined is actually defined.\nBut not everything is required, right? That is true. Using the version as an example: if that is an optional parameter in your configuration, you can define it like \u0026lsquo;version: int | None = None\u0026rsquo;, and it must be an integer if it exists, or it is simply ignored (optional) if it is not defined.\nSections So, not all configuration has just one layer; most configuration has lists, dicts, a combination of them. Can you validate that as well? Yes you can. You can point a certain part of your configuration to an \u0026lsquo;upstream\u0026rsquo; validation. So instead of saying that \u0026lsquo;version: int\u0026rsquo; is what should happen, imagine it is a more complex structure. You can duplicate your code block and name it \u0026lsquo;VersionCheck\u0026rsquo; for example. Then you do this in the lower config item: \u0026lsquo;version: VersionCheck\u0026rsquo;. You place that new structure named VersionCheck ABOVE (bottom up thus) the normal validator, and define what the version contents should look like. Perhaps the configuration looks like this:\nversion: name: This is our version major: 1 minor: 0 patch: p0 That does not validate if you declare \u0026lsquo;version: int\u0026rsquo;, right? So imagine you created that new VersionCheck; you can then point version to that validation object, and define it as:\nname: str major: int minor: int patch: str (or a regex stating that ^p\d is what you expect). The version tag itself is not repeated there, because you \u0026lsquo;descend\u0026rsquo; into the version hierarchy when you reference it.
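To make that concrete, here is a rough sketch of how such a nested VersionCheck could look (my own minimal example, assuming Pydantic v2 and Python 3.10+; it is not the model we use at work, and the Item model only exists for this illustration):

```python
from typing import Literal

from pydantic import BaseModel, Field


class VersionCheck(BaseModel):
    """Validates the contents of a nested 'version' block."""

    name: str
    major: int
    minor: int
    patch: str = Field(pattern=r"^p\d")  # e.g. 'p0' (Pydantic v2 'pattern' keyword)


class Item(BaseModel):
    """One configuration item; 'version' descends into VersionCheck."""

    state: Literal["present", "absent"]
    enabled: bool
    version: VersionCheck | None = None  # optional, only validated when present


# Data as it could come out of a parsed yaml configuration:
item = Item(
    state="present",
    enabled=True,
    version={"name": "This is our version", "major": 1, "minor": 0, "patch": "p0"},
)
print(item.version.major)  # prints: 1
```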
That way you can loop over lists, dicts, etc pretty easily.\nHow did you implement this? I cheated a bit and read his model and adopted it to our configuration and we optimized it a bit to use it in our CI/CD stream. I use Ansible (yes again) to construct the configuration that I modified my colleague\u0026rsquo;s wrapper for and use that to parse the data.\nI cannot share details on how we did that at work, but if you are really curious I am considering writing a post on it, so that you can have an idea.\n","permalink":"https://www.evilcoder.org/posts/2024-10-01-mermaid-and-pydantic/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eIn my previous article, I was writing about Mermaid and that I wanted to\nexperiment with generating documentation from the actual sources.\u003c/p\u003e\n\u003cp\u003eI was recently able to focus properly on this and I think I had a blast\nbreakthrough at least for myself.\u003c/p\u003e\n\u003ch2 id=\"the-state-now\"\u003eThe state now\u003c/h2\u003e\n\u003cp\u003eLast time I wrote that I was still investigating how and what, especially\nbecause of the compression that takes place when you do a graph TopDown(TD).\nThere is a very simple solution to that, make it Left to Right!\u003c/p\u003e","title":"Mermaid and Pydantic"},{"content":"My gear Based on the gear page of Paul Grevink (just use google), I decided to publish my lab setup as well. Since it is based on Paul\u0026rsquo;s gear you see some similarities. We both have the Synology (218+), but admittedly we both had that already before we got to know each other.\nGateway My gateway running the KPN fiber line we have is a Unifi Cloud Gateway Fiber, hosting a couple of VLAN\u0026rsquo;s, IGMP Proxy for the KPN iTV and IPv6 as well. It is managed through the Unifi Network Application, that also drives the other Unifi switches and accesspoints. It is relatively easy to setup and manage and works so far without a charm. Which is way better then my experience with the RB5009 that ran the network before.\nSwitches My household uses a few switches, a unifi US-8 with POE out is hosting a few connections downstairs, and in my lab I currently have a TP Link Omada SG3428. As \u0026lsquo;intermediate\u0026rsquo; switches I use the cheap Ubiquity Flex Mini\u0026rsquo;s to passthrough the VLAN\u0026rsquo;s and one Ultra 60W, which powers one of the access points and one of the Flex Mini\u0026rsquo;s. I am considering replacing the Omada with an equal or faster Unifi switch at a certain point.\nAccesspoints There are three AP\u0026rsquo;s in the house, on the bottom floor, one at the bedroom floor and one on the top floor. Driven by the Ubiquiti U7-Pro.\nProxmox cluster My Proxmox cluster (as virtualisation engineer you must have this), consists of three Nuc13 PRO\u0026rsquo;s (RNUC13ANHI70002), with Samsung 990 PRO 1TB SSD\u0026rsquo;s and Kingston A400 480GB SSD\u0026rsquo;s and a small 64GB M2 local disk. Each machine has 64GB of Samsung Fury memory, and an additional NIC to seperate management and vm traffic.\nThe fourth node in this cluster is an almost equal machine. It is a \u0026lsquo;12\u0026rsquo; core i7 node, also running with 64GB of memory, a quad nic pci-e card, a 480GB Cache disk and a Samsung 990 PRO 1TB M2 SSD disk.\nThe cluster runs various Unix VM\u0026rsquo;s, that service my house hold. 
In the past I experimented with a VMware based setup, but sadly the new VMware Cloud Foundation setup no longer fits on my small nodes.\nNAS Since I cannot afford a SAN at home, I have a NAS driven by the Synology DS218+ with 2x4TB (raid1) storage available. Large VM\u0026rsquo;s land here, and it also serves internal data and backups.\nA second Synology DS1522+ serves the proxmox node and various VMware VM\u0026rsquo;s. I actively use the 4x1GB Ethernet connections in a dual LAG setup for NFS4 access (the disks are the bottleneck). This Synology has 2x4TB Raid1 storage and 2x3TB Raid1 storage available. Important data from the DS218+ is also mirrored on this node.\nOffline equipment The following hardware is still in \u0026lsquo;storage\u0026rsquo; but offline:\namount description\n1x Mikrotik CSR125\n2x Unifi AC PRO\n1x Mikrotik RB2011\n1x Experiabox 12\n1x Unifi Flex Mini\n1x USB Nic ","permalink":"https://www.evilcoder.org/gear/","summary":"\u003ch2 id=\"my-gear\"\u003eMy gear\u003c/h2\u003e\n\u003cp\u003eBased on the gear page of Paul Grevink (just use google), I decided to publish my\nlab setup as well. Since it is based on Paul\u0026rsquo;s gear you see some similarities.\nWe both have the Synology (218+), but admittedly we both had that already before we got\nto know each other.\u003c/p\u003e\n\u003ch2 id=\"gateway\"\u003eGateway\u003c/h2\u003e\n\u003cp\u003eMy gateway running the KPN fiber line we have is a Unifi Cloud Gateway Fiber, hosting a couple\nof VLAN\u0026rsquo;s, IGMP Proxy for the KPN iTV and IPv6 as well. It is managed through the Unifi Network\nApplication, that also drives the other Unifi switches and accesspoints. It is relatively easy\nto setup and manage and works so far without a charm. Which is way better then my experience\nwith the RB5009 that ran the network before.\u003c/p\u003e","title":"Gear"},{"content":"Introduction In a previous article, I wrote something about Azure Devops and how I see it. I also wanted to play with automated documentation generation, which basically means that I start from a template, pull bits and pieces from a configuration that I also use to deploy things, put them into a markdown file and present that on a \u0026lsquo;host\u0026rsquo;.\nOne of the things that \u0026lsquo;annoyed\u0026rsquo; me is that documentation always lags behind. Why? Because you need to modify it to remain relevant. Think of it as the instruction manual for your car: as long as the car itself remains the same, the manual remains relevant. But if you, let\u0026rsquo;s say, change the navigation unit in it, you either need to write a new part for that in the documentation (and add it as an addendum or something), or reprint the instructions, otherwise it will no longer match.\nThis requires manual labor. And a part of my job is to automate our deployments. Manual, automate, that does not compute, right? It does in some contexts, but in this particular context it does not.\nI wanted to experiment with a system that would automatically rebuild the documentation based on the actual sources. Since we run an Ansible environment, we have a codebase and configuration in place. Why not use them together and extract the bits and pieces we need?\nWhat I found initially Is that my predecessor, thanks Liam, already took care of a couple of those things.
He wrote playbooks that extract a couple of configuration parameters, transform those into a temporary Markdown file, and release that as an artifact which gets used before publishing the wiki, for example.\nThat gave me a nice boost, it made practically visible what is possible, so I decided to start the experiment there. But that is mainly text? What about graphics? Mermaid to the rescue!\nMermaid I saw, I think a year ago, a mention of mermaidjs from a colleague and also saw it in Joplin, and then at various other places. Back then I did not take particular note of it, as I was busy making automated deployments for our infrastructure.\nNow that the dust has settled and I was going to experiment with automated documentation, I searched my memory and recalled Mermaid. What does it do?\nWhat is mermaid? Mermaid is basically a diagramming and charting tool. With a simple set of instructions, you can create a diagram from text based input. Awesome right? To give a little example, in markdown you could specify this:\n```mermaid flowchart TD Topitem --\u0026gt; Secondlayer_1 Topitem --\u0026gt; Secondlayer_2 Secondlayer_1 --\u0026gt; Bottomlayer_1_1 Secondlayer_1 --\u0026gt; Bottomlayer_1_2 Secondlayer_2 --\u0026gt; Bottomlayer_2_1 Secondlayer_2 --\u0026gt; Bottomlayer_2_2 ``` flowchart TD Topitem --\u003e Secondlayer_1 Topitem --\u003e Secondlayer_2 Secondlayer_1 --\u003e Bottomlayer_1_1 Secondlayer_1 --\u003e Bottomlayer_1_2 Secondlayer_2 --\u003e Bottomlayer_2_1 Secondlayer_2 --\u003e Bottomlayer_2_2 This is of course all very basic; the mermaid processor can work with \u0026lsquo;ids\u0026rsquo; that you can give a name, offers a large set of different graphical layouts, and lets you use markup to make clear what an id means. See here for more information on how this could work for you.\nAnd now what? Well, did I mention that I use \u0026lsquo;draw.io\u0026rsquo; when I want to make a drawing and/or diagram? Do you notice the overlap there? Most often a drawing of what an environment looks like is nothing more than a diagram with some markup. Machine A connects to Network A with IP A and B on both sides, they use protocol XYZ between them to talk to each other, they have a certain storage backend to storage host A, etc. That is just a flowchart presented differently. In my eyes at least. Send me a message in case you disagree or see it differently.\nSo. We have this annoying thing that you need to update documentation when you change something, which most often is still required, but there are also parts that you can extract from \u0026lsquo;your configuration\u0026rsquo; and update automatically. Like a drawing of the environment! And you know as well as I do that drawings are most often updated last, and/or just forgotten.\nI started to experiment with a hardcoded diagram that looks a bit like the above example, but more related to our environment.\nMy environment In a VMware environment you normally have a vCenter, one or more clusters, and each cluster has one or more ESXi hosts. Those hosts have some sort of storage layer between them, specific networks and vlans assigned to them, etc. I combined that data in a diagram. And it looked like a nice start, but it was not really there yet. In a small scale setup this would make a network diagram automatically, which means you can automate the network drawings away for these infrastructure components (and not only those!). But we don\u0026rsquo;t have a small scale setup. We have quite a large estate running.
This made it quickly difficult to read.\nFrom within DrawIO I would do something like the following to present this (strongly simplified, and condensed, you get the drill):\nSubgraphs? Because it became difficult to read if you add enough (valuable) data. I tried using sub graphs, so I created a vCenter on top, linked that to a box with hosts on it, linked it to a storage backend, linked to a network with a couple of vlans (distributed switch reference in case you are familiar).\nRoughly this looks like:\nflowchart TD; subgraph A_vCenter vCenter --\u003e A_Cluster vCenter --\u003e A_Storage vCenter --\u003e A_Network end subgraph A_Cluster Cluster_A --\u003e Host_A Cluster_A --\u003e Host_B Cluster_A --\u003e Host_C end subgraph A_Storage Storage_A --\u003e Volume_A Storage_A --\u003e Volume_B end subgraph A_Network Network_A --\u003e Vlan_A_123 Network_A --\u003e Vlan_B_456 end ```mermaid flowchart TD; subgraph A_vCenter vCenter --\u0026gt; A_Cluster vCenter --\u0026gt; A_Storage vCenter --\u0026gt; A_Network end subgraph A_Cluster Cluster_A --\u0026gt; Host_A Cluster_A --\u0026gt; Host_B Cluster_A --\u0026gt; Host_C end subgraph A_Storage Storage_A --\u0026gt; Volume_A Storage_A --\u0026gt; Volume_B end subgraph A_Network Network_A --\u0026gt; Vlan_A_123 Network_A --\u0026gt; Vlan_B_456 end ``` This is still readable in the above example, but if you have lets say 500 vlans connected to a cluster, just because you can, it comes difficult to read. If you have multiple clusters under a vCenter, it comes difficult to read, especially if you want to connect a cluster to A_Storage and another one to B_Storage and perhaps want to mention which interfaces they use as the line-text. To give an example, I tried drawing that below:\nflowchart TD; subgraph A_vCenter vCenter --\u003e A_Cluster vCenter --\u003e B_Cluster end subgraph A_Cluster Cluster_A --\u003e Host_A_A(Host_A_A) Cluster_A --\u003e Host_A_B(Host_A_B) Cluster_A --\u003e Host_A_C(Host_A_C) Cluster_A --\u003e A_Storage Cluster_A --\u003e A_Network end subgraph B_Cluster Cluster_B --\u003e Host_B_A Cluster_B --\u003e Host_B_B Cluster_B --\u003e Host_B_C Cluster_B --\u003e B_Storage Cluster_B --\u003e B_Network end subgraph A_Storage Storage_A --\u003e Volume_A_A[(Volume_A_A)] Storage_A --\u003e Volume_A_B[(Volume_A_B)] end subgraph B_Storage Storage_B --\u003e|HBA_B_A_1| Volume_B_A[(Volume_B_A)] Storage_B --\u003e|HBA_B_A_2| Volume_B_B[(Volume_B_B)] end subgraph A_Network Network_A --\u003e Vlan_A_A_123 Network_A --\u003e Vlan_A_B_456 end subgraph B_Network Network_B --\u003e|Iface_B_A_1| Vlan_B_A_1 Network_B --\u003e|Iface_B_A_2| Vlan_B_A_2 Network_B --\u003e|Iface_B_A_3| Vlan_B_A_3 Network_B --\u003e|Iface_B_A_4| Vlan_B_A_4 Network_B --\u003e|Iface_B_A_5| Vlan_B_A_5 Network_B --\u003e|Iface_B_A_456| Vlan_B_B_456 end ```mermaid flowchart TD; subgraph A_vCenter vCenter --\u0026gt; A_Cluster vCenter --\u0026gt; B_Cluster end subgraph A_Cluster Cluster_A --\u0026gt; Host_A_A(Host_A_A) Cluster_A --\u0026gt; Host_A_B(Host_A_B) Cluster_A --\u0026gt; Host_A_C(Host_A_C) Cluster_A --\u0026gt; A_Storage Cluster_A --\u0026gt; A_Network end subgraph B_Cluster Cluster_B --\u0026gt; Host_B_A Cluster_B --\u0026gt; Host_B_B Cluster_B --\u0026gt; Host_B_C Cluster_B --\u0026gt; B_Storage Cluster_B --\u0026gt; B_Network end subgraph A_Storage Storage_A --\u0026gt; Volume_A_A[(Volume_A_A)] Storage_A --\u0026gt; Volume_A_B[(Volume_A_B)] end subgraph B_Storage Storage_B --\u0026gt;|HBA_B_A_1| Volume_B_A[(Volume_B_A)] Storage_B --\u0026gt;|HBA_B_A_2| 
Volume_B_B[(Volume_B_B)] end subgraph A_Network Network_A --\u0026gt; Vlan_A_A_123 Network_A --\u0026gt; Vlan_A_B_456 end subgraph B_Network Network_B --\u0026gt;|Iface_B_A_1| Vlan_B_A_1 Network_B --\u0026gt;|Iface_B_A_2| Vlan_B_A_2 Network_B --\u0026gt;|Iface_B_A_3| Vlan_B_A_3 Network_B --\u0026gt;|Iface_B_A_4| Vlan_B_A_4 Network_B --\u0026gt;|Iface_B_A_5| Vlan_B_A_5 Network_B --\u0026gt;|Iface_B_A_456| Vlan_B_B_456 end ``` And this is still very minimal, in a regular setup, you have many more details that you might want to add. For now I am playing with the idea to generate multiple pages of data and zoom in om a specific set of data per page render. This doesn\u0026rsquo;t automate away the generation of always up to date environment drawings though.\nManually Also this is still all done manually, which is also not the idea. What I want to do is write an Ansible playbook, or re-use existing ones, and grab data from it and use the ID and Name of a parameter to form the documentation and create that before the artifacts are generated and used to generate the Wiki.\nBriljant, now that works we have finally a automated network drawing? No not really, the above examples are using the plain mermaid engine in for example GoHugo. See Here how to do that. But.. as more often Azure does a different thing. You can render a more limited subset of the Mermaid application within Azure, using the :::mermaid code block. This looks similar to the ``` blocks from Markdown, but isn\u0026rsquo;t entirely the same. Also it appears that you cannot use all features, nor multiple in a row. So the above examples will not be possible within Azure at this moment. Having said that, I also heard that the feature was much more limited before so it got traction anyway. I hope it will be build out to a much more full set of the regular mermaid implementation and that you can also use it similar to the examples on the web. It would make life much easier for a lot of people. Note that the above examples are, as far as I know and could test, usable, just limited to one per page.\nIf you have ideas about this, and/or would like to discuss this, you know where to find me (See the contact page on top).\n","permalink":"https://www.evilcoder.org/posts/2024-02-27-azure-devops-mermaid/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eIn a previous article, I wrote something about Azure Devops and how I see it.\nI also wanted to play with automated documentation generation, which basically\nmeans that I am considering a template, and get bits and pieces from a configuration\nthat I also use to deploy stuff, into a markdown file and present that on a \u0026lsquo;host\u0026rsquo;.\u003c/p\u003e\n\u003cp\u003eOne of the things that \u0026lsquo;annoyed\u0026rsquo; me is that documentation always lags behind.\nWhy? Because you need to modify it to remain relevant. See it is an instruction\nmanual for your car, as long as the car itself remains the same, the manual\nremains relevant. But if you lets say, change the navigation unit in it, you\neither need to write a new part for that in the documentation (and add it as\naddendum or something), or reprint the instructions, else it will not fit anymore.\u003c/p\u003e","title":"Azure Devops - Mermaid"},{"content":"Introduction Azure Devops is the \u0026lsquo;Github\u0026rsquo; of Microsoft basically. It contains container registries, pipelines, git repositories, an test framework, sprint/task boards and many more things. 
For many companies this is the defacto standard when it comes to doing DevOps based work.\nLately I have been working a lot with the Azure Devops Git/Pipeline options within Azure DevOps and must come to the conclusion, that a lot of the things we are using, are not easy to find in the courses online. For this I tried to combine the options that I use in this blog post. This blogpost will be periodically updated when I found out new things, so that this combines all my knowledge in this region. I will probably address the newer items in a seperated blog entry as well.\nArchitectural overview of the blogpost To make clear what all the different bits and pieces of Azure mean and to make it more visual, I created a little diagram: The Diagram was created with DrawIO, and reflects a few items that will be explained later. The yellow boxes are \u0026ldquo;stages\u0026rdquo;, they are filled with \u0026ldquo;blue\u0026rdquo; job boxes, a stage has one or more jobs, and subsequently the green boxes form steps, the lowest but most important parts in a job.\nThe Diagram also demonstrates the Azure cloud on the right handside, and \u0026ldquo;Artifact\u0026rdquo; stores, they are \u0026ldquo;one\u0026rdquo; within the cloud, but not to clobber the drawing, I made three of them. Effectively they are all the same!\nIn addition there are three agents referenced, two different types. Each stage / job runs on it\u0026rsquo;s own agent and can have different policies applied to them. We have the External Agents (EA) and Internal Agents (IA) in this drawing to make clear why Artifacts can be helpful.\nOfcourse all the information can be found on the internet, one of my main sources of information for Azure DevOps is Microsoft itself, and there are many useful external references as well. If you have comments, or want to discuss with me, see the contact menu item on top on how to contact me. Do note that not every option from Azure DevOps is mentioned here, there are far too many and I simply dont use everything, but I use a lot of the options available.\nPipelines What are pipelines actually? I always visualize the pipeline as a factory. Something gets in (Resources) is being processed on various levels, and something gets out (product). A good visualisation is likely a car factory. Some metals and required resources get in, the framework gets build, doors, tires, electronics, etc etc. are being added and optimized based on request, the car will be colored according to the customer demands, and in the end a test will be concluded and the product delivered to the end user.\nAll this, is largely done automatically inside the factory. If you explain that as \u0026lsquo;a pipeline\u0026rsquo;, then you have an understanding on what a pipeline does.\nYou can also call it an advanced job scheduler if you are an old school Unix guy like I am. In practise the pipeline orchestrates and enriches the jobs that need to run to form the end result. This could be schedule based or trigger based on a commit (checkin) to a repository, merge request, etc.\nExplanation about pipelines within Azure Like every vendor, the pipeline implementation all differs in some bits and pieces. If you use the above example in your mindset, you can most likely extract the relevant data from the product you are using. It could be that some of the options are named differently or used differently. 
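To make the Azure terminology concrete before going through the individual pieces, a minimal, hypothetical pipeline with one stage, one job and a couple of steps could look like this (all names are made up):

```yaml
# Hypothetical minimal pipeline: one stage, containing one job, containing steps.
trigger:
  - main

stages:
  - stage: Pre_Actions
    displayName: Pre actions
    jobs:
      - job: Make_Backup
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: echo "running validation tests"
            displayName: Do validation tests
          - task: PowerShell@2
            displayName: Call the (imaginary) backup API
            inputs:
              targetType: inline
              script: Write-Host "calling the backup API"
```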
I myself have used Gitlab CI/CD to automate the delivery of these webpages amongst, others, but also used Jenkins at my previous employer to generate periodic reports and in the past I used this to deliver these webpages. At work we use Azure to deliver CI/CD capabilities. All three have the same methodology but different implementations. Practise in real life makes a difference so do that in case you are familiar with one product and not yet with Azure.\nstages, jobs, steps What are stages You can see stages as as a set of \u0026ldquo;jobs\u0026rdquo; that form a coherent action. For example if you have an upgrade process in place, you will probably make a backup first, perform actions and update your configuration database and cleanup the backup (if succeeded).\nIf you would create a high level drawing out of that, you would seperate them in three steps likely: pre-actions, actions, post-actions. each of those actions is a stage.\nA stage can depend on another stage, imagine you need to prepare an image in a stage, and later us that image to continue your deployment. Selecting:\n- stage: \u0026#34;Name_earlier_stage\u0026#34; [....] - stage: \u0026#34;NameStage\u0026#34; dependsOn: \u0026#34;Name_earlier_stage\u0026#34; will make your second stage dependent on your first stage.\nWhat are jobs As you could have read in the previous part about stages, a stage is divided into a set of jobs. A job is a set of steps that form a logical entity as well. In the above example, a job could be to do pre-actions. Imagine that this job will do a couple of things (steps) before it finishes all these pre-actions. In Unix you would likely call this a script, that has several functions or actions taking place inside it.\nIn case of the pre-actions, you could do that into some smaller jobs like:\ndo validation tests make backup store backup update configuration database set machine into maintenance (for alerting etc) notify monitoring department or person on-call etc. All these jobs have several actions taking place inside these jobs, we will get to them in the next section.\nWhat are steps Steps are the lowest part inside the Azure Pipeline, while on the bottom level they are certainly one of the most important onces. Here actions actually take place, where the stages and jobs form and combine the logical actions taking place they do not hold actions themselves, the steps actually do something.\nImaging the above examples and we zoom in on the \u0026lsquo;make backup\u0026rsquo; job, that could have several steps:\nPrepare environment fetch keyvault secrets install required dependencies download company internal applications Login to the current machine to obtain token Use token to call backup API Download backup into staging directory Upload artifact into the artifact store (either pipeline artifact, or Azure artifact) Each step runs on the agent that runs the job, but could have different tasks or scripts that will be executed, like calling the API could be done with a Powershell script that is used internally, but fetching the keyvault secrets could be a task provided by Azure or company-wide.\nThe difference between tasks and scripts The difference is basically quite simple. A task is basically a wrapper around a script. If you have the Powershell@2 task, that takes several options, that abstracts away some parameters that you would normally have to write. A common one that you can find there is \u0026lsquo;workingDirectory\u0026rsquo;, which specifies where the script will be executed. 
The difference between tasks and scripts
The difference is basically quite simple: a task is essentially a wrapper around a script. The Powershell@2 task, for example, takes several options that abstract away parameters you would normally have to write yourself. A common one is 'workingDirectory', which specifies where the script will be executed. It also allows you to either run an inline script (where the script body is an input of the task) or to reference an external script file.
A plain script does not have that wrapper around it, so you can customize it better and do whatever you need to do, but you also need to do everything yourself. I find myself using a combination of both, but for AzureKeyVault it is very convenient to use the task.
What are the @digit's in tasks?
The @0, @1, @2, etc. are the different versions of a task. This way a task can evolve over time with new options, while people can still pin the version they rely on.
Practical examples within the pipeline
Logical operators
Logical operators are found in most languages. They allow you to express English-like decisions: 'if something is true, then do action A, else if something else is true, do action B, and if nothing is true, well, then do action Z'.
In most languages that is quite simple:
if value in string:
For Azure I find it a bit harder: you have operators to 'wrap' a condition in, like 'eq', 'ne', 'contains', 'containsValue', 'and', 'or', and many more.
They are used like the example below. In this example both checks need to be true: the first check tests whether parameters.key is equal to yourValue, and the second check tests whether 'key2' contains the string 'yourOtherValue'. If so, the whole expression is true and whatever you combine it with will be executed.
and(eq(parameters.key, 'yourValue'), contains(parameters['key2'], 'yourOtherValue'))
I find this a bit harder to read, especially when the and/or operators take a few additional arguments that you want to test against. It can become unreadable quite quickly, so write out the thing you want to do first, so that you have it 'drawn' or 'designed'.
Azure calls them "Functions".
If then else example
Within Azure you can of course also do an if-then-else, but you need to be aware of the indentation.
job:
  task: AzureKeyvault@0
  parameters:
    ${{ if eq(parameters.test, 'true') }}:
      vault: test
    ${{ else }}:
      vault: something else
If value contains
Likewise, using the if / else construct, you can also check whether a piece of text is included in a parameter. This can be useful if you have a naming convention or unique indicator that you can use to derive certain parameters.
job:
  task: AzureKeyvault@0
  parameters:
    ${{ if contains(parameters.test, 'your_name') }}:
      set_value: to what you need
Loop through a list
parameters:
  - name: loop_list
    type: object
    displayName: A nice name for the loop_list
    default:
      - a
      - b
      - c
job:
  ${{ each value in parameters.loop_list }}:
    task: AzureKeyvault@0
    parameters:
      key: ${{ value }}
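As a side note on these functions: besides using them at compile time inside ${{ if }} blocks as shown above, the same kind of expression can also be attached to a job or step as a runtime condition. A minimal sketch; the step and its script are placeholders of my own:

steps:
  - script: echo "only runs when both checks are true"
    displayName: Conditional step
    # runtime condition using the same functions (and, eq) as above
    condition: and(succeeded(), eq(variables['Build.SourceBranchName'], 'main'))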
Templating
Of course, you will probably notice that the things you need to do in your tasks have overlapping sections between various pipelines. If you always need to fetch your AzureKeyVault secrets with certain custom handling, then you would need to repeat that for every pipeline again and again... right?
Like with regular coding, you can 'template' these things. In Azure terminology a template is a file, either from the local repository or from a remote 'resource' repository, that contains a repeatable action which you can include in your pipeline.
If you work for a bigger organisation, or use sections that you need to re-use between pipelines, this saves a lot of time and maintenance as well. A bugfix or new feature is propagated to each pipeline instantly and becomes effective the next time you run it. Of course, on the negative side, a bug is also propagated to all pipelines using the code.. so test it well!
Internal resource
With an internal resource I mean specifying a template that lives within the current scope/repository, like:
jobs:
  - template: localpath/to/template.yml
External resources
One of the options to include a file from elsewhere is by using a "resource". This goes as follows:
resources:
  repositories:
    - repository: myrepo
      type: git
      name: yourorg/myrepo
Later in the code you can reference this by:
jobs:
  - template: path/to/template.yml@myrepo
where myrepo is the name of the repository as defined at '- repository: myrepo'. You can of course include more repositories and reference them accordingly.
Scheduling and/or triggers
Scheduling and triggers define when a pipeline gets executed, if not started manually.
If you want to use a cron-like schedule, use:
schedules:
  - cron: "59 23 * * *"
    displayName: Just before midnight
    branches:
      include:
        - yourbranch
    always: true
where the pipeline will run one minute before midnight, using the cron-style definition of a schedule. In this case it will only run for 'yourbranch'.
If you do not want to use this, either remove the schedules: section and/or set:
trigger: none
to disable trigger-based execution of your pipeline.
Variables and Parameters
One of the dynamic features that most CI/CD tools offer is variables. Azure uses them as well and extends this by also offering parameters.
What are variables
Variables are one-line objects that define an identifier you can re-use throughout the entire pipeline. They can be changed, and/or exported from a task or job output so that you can use them in a different job as well (which runs on its 'own' agent and potentially on a very different machine, unaware of the original job / agent); see the sketch below.
For example:
variables:
  - name: my_project_name
    value: "www.evilcoder.org"
You can use this throughout the pipeline with $(my_project_name), which will print 'www.evilcoder.org' in this example.
The $() syntax is common for scripting languages like 'shell', but there are other ways to reference a variable as well. This form is what Azure calls macro syntax; it is (hopefully) filled in just before the task runs, so the variable is available at that moment (task level).
Azure also supports so-called template expressions, which are expanded at compile time of the pipeline and cannot be changed afterwards. An example is ${{ variables.my_project_name }}, which has a different syntax but will still print 'www.evilcoder.org' in this example.
Another way to reference variables is by using runtime expressions, which are processed at runtime (pipeline level). They are referenced like $[variables.my_project_name]. You will see this being used when doing comparisons or checking whether a variable contains a certain string.
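To illustrate the 'exported from a job output' case mentioned above, here is a minimal sketch of how a variable set in one job can be read in another job. The job, step and variable names are placeholders of my own:

jobs:
  - job: A
    steps:
      # Mark the variable as an output so other jobs can read it
      - script: echo "##vso[task.setvariable variable=my_output;isOutput=true]www.evilcoder.org"
        name: set_var_step

  - job: B
    dependsOn: A
    variables:
      # Runtime expression: resolved when job B starts
      my_project_name: $[ dependencies.A.outputs['set_var_step.my_output'] ]
    steps:
      - script: echo $(my_project_name)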
What are parameters
Parameters are, in a nutshell, more sophisticated variables. You can specify the type of the object, provide allowed values for them (if the list is long enough you get a dropdown), etc. What I notice when using parameters: when you run a pipeline manually, you get a popup asking for the value of each parameter, and unless a default is defined you need to type it in or select it from the menu.
They are resolved just before the pipeline runs and are therefore static. If you need more dynamic behaviour, variables are much more flexible. However, I prefer the use of parameters because they give a greater amount of control over the type.
An example parameter:
parameters:
  - name: Webname
    displayName: Hostname of the webservice
    type: string
    default: "https://www.evilcoder.org"
    values:
      - "https://www.remkolodder.nl"
      - "https://www.evilcoder.org"
      - "https://www.elvandar.org"
This will give you a selectable list for the Webname parameter, which you can use later on in a task like this:
- script: |
    hugo --baseURL ${{ parameters.Webname }}
which sets the base URL for a hugo site to the site specified. Note that I used the long version of the hugo argument to keep it readable.
Difference between variables and parameters
As mentioned in the text above, a variable can be changed on the fly when needed, or exported in a job to be re-used in a subsequent job. Parameters are more static and are only resolved just before the pipeline runs. That makes them immutable during the run, but you have a firmer grip on what kind of input you expect and you can even limit the values it accepts (see the values above).
Simultaneous execution of something
You can run several items in parallel; for that you need a strategy called "matrix".
What this does is take the keys in the matrix (the first items on the same indentation level) and loop through them, with the key/value pairs inside them available in your task or script.
parameters:
  - name: matrix_you_will_be_using
    type: object
    default:
      name_1:
        variable_a: "stringA"
        variable_b: "stringB"
      name_2:
        variable_a: "stringAA"
        variable_b: "stringBB"
[...]
jobs:
  - job:
    strategy:
      matrix: ${{ parameters.matrix_you_will_be_using }}
    steps:
      - script: |
          echo $(variable_a)
          echo $(variable_b)
This will run the loop two times, once for name_1 and once for name_2, and print the variables: stringA and stringB for the name_1 run, and stringAA and stringBB for the name_2 run. The runs can execute concurrently, so if you need to run a certain test many times with the same kind of input variables but different values, you can do that en masse. You can limit the number of concurrently running items with 'maxParallel', as shown in the sketch below.
The YAML structure defined above starts each iteration with the name_ keys, and then you can refer to the variables with just their short name. Compare that to running a loop through the object with "each", for example:
- ${{ each ShortName in parameters.matrix_you_will_be_using }}:
  - script: |
      echo ${{ ShortName.variable_a }}
      echo ${{ ShortName.variable_b }}
which looks quite different from the strategy above.
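For completeness, a minimal sketch of limiting concurrency with maxParallel on a hard-coded matrix; the job and variable names are placeholders of my own:

jobs:
  - job: run_tests
    strategy:
      matrix:
        name_1:
          variable_a: "stringA"
        name_2:
          variable_a: "stringAA"
      maxParallel: 1          # run the matrix entries one at a time
    steps:
      - script: echo $(variable_a)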
Artifacts
As you can see in my example image at the top, a couple of Artifact stores are mentioned, along with some airgap lines. These airgap lines suggest that you cannot access data from one stage in another, and with the difference between external and internal agents, you could be limited by internal policies as well. Imagine that you cannot fetch anything from the internet via the Internal Agents (IA); then you always need to fetch a resource via the External Agents (EA). You could see the difference between them as an airgap.
If you allow your agents to access the Azure cloud though, then you can use the Azure Artifact resource as an airgap proxy. You can see it as a huge store where you can upload 'artifacts' (I tend to think of them as 'zip' files): you upload a resource from the EA, and download it later on the IA. In this example it could be used to create a pre-actions report, upload that via Azure Artifacts, and download it in the next stage on the IA to do something with that report.
Artifact Types
There are two types of artifacts that I am currently familiar with: Pipeline Artifacts and Azure Artifacts.
Pipeline artifact
This type of artifact only survives the lifetime of a pipeline. They are not billed either, and artifacts generated during the run will be there until the pipeline results are deleted. This means that if your retention period for pipelines is 10 days, then after 10 days the artifacts included in that pipeline are removed as well.
The usage is quite simple. An important variable within Azure DevOps is the $(Build.ArtifactStagingDirectory) variable. This variable marks the staging directory where 'objects' are stored temporarily before being placed in an artifact, or whatever else you are going to do with them. Normally the relevant files are copied there using a 'CopyFiles' task and then put into an artifact like:
- task: PublishPipelineArtifact@1
  displayName: 'Publish'
  inputs:
    targetPath: $(Build.ArtifactStagingDirectory)/webroot/**
    ${{ if eq(variables['Build.SourceBranchName'], 'main') }}:
      artifactName: 'webroot-prod'
    ${{ else }}:
      artifactName: 'webroot-test'
    artifactType: 'pipeline'
Afterwards, in subsequent stages or jobs, you can download this webroot-prod or webroot-test artifact (which is indeed the name of the artifact):
- task: DownloadPipelineArtifact@2
  inputs:
    artifactName: 'webroot-test'
    targetPath: $(Build.SourcesDirectory)/webroot
which will fetch the latest version of the webroot(-prod | -test) artifact inside the pipeline. You can download resources from other branches or pipelines as well; see the reference at Microsoft.
Azure Artifacts
I use this kind of artifact a lot; they are billed though, so you should consider your storage options and funding before using them.
Azure Artifacts use incrementing numbers to identify versions. You can of course specify a version id yourself, which can be handy if you also 'git tag' your code to point to a certain release and create an artifact out of that with the same version. Unlike the 'PipelineArtifact' tasks, which describe the task at hand quite well, Azure Artifacts use the UniversalPackages task. By default the ArtifactStagingDirectory is used, and all files under it are uploaded to the Azure Artifact store. Note well: if you use the GUI to generate the task for you, you can select the feed etc.
This will be converted into it\u0026rsquo;s UUID if transformed into YAML. That might be quite hard to read, so I suggest converting them to your actual feedname and packagenames instead.\nAn example on how to create an Azure Artifact with your own tags (Assume this version is in the $(git-tag-version) variable:\n- task: UniversalPackages@0 displayName: Create latest webroot artifact inputs: command: publish publishDirectory: \u0026#39;$(Build.ArtifactStagingDirectory)\u0026#39; vstsFeedPublish: \u0026#39;Yourproject/your-generated-feed\u0026#39; vstsFeedPackagePublish: \u0026#39;webroot\u0026#39; versionOption: custom versionPublish: \u0026#39;$(git-tag-version)\u0026#39; packagePublishDescription: \u0026#39;Webroot for website\u0026#39; and later on you can download that in a different job and/or stage (make sure you depend on this if you need to have the latest version generated inside your pipeline, else you might end up downloading a different version, or the version you want to have cannot be found. To download the latest version specify \u0026lsquo;*\u0026rsquo;. Else specify the exact version number.\n- task: UniversalPackages@0 displayName: \u0026#39;Download latest webroot version\u0026#39; inputs: command: download vstsFeed: \u0026#39;Yourproject/your-generated-feed\u0026#39; vstsFeedPackage: \u0026#39;webroot\u0026#39; vstsPackageVersion: \u0026#39;$(git-tag-version)\u0026#39; downloadDirectory: \u0026#39;$(Build.SourcesDirectory)\\webroot\u0026#39; ","permalink":"https://www.evilcoder.org/posts/2023-04-02-azure-devops/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eAzure Devops is the \u0026lsquo;Github\u0026rsquo; of Microsoft basically. It contains container registries,\npipelines, git repositories, an test framework, sprint/task boards and many more things.\nFor many companies this is the defacto standard when it comes to doing DevOps based work.\u003c/p\u003e\n\u003cp\u003eLately I have been working a lot with the Azure Devops Git/Pipeline options within Azure\nDevOps and must come to the conclusion, that a lot of the things we are using, are not\neasy to find in the courses online. For this I tried to combine the options that I use\nin this blog post. This blogpost will be periodically updated when I found out new\nthings, so that this combines all my knowledge in this region. I will probably address\nthe newer items in a seperated blog entry as well.\u003c/p\u003e","title":"Azure Devops"},{"content":"Introduction Working at a financial institute ofcourse requires the use of using secure access to usernames and passwords and the like. For some period that had been out of reach for home consumers or smaller companies, but now with Hashicorp Vault, Ansible Vault, or smart access to applications like 1Password gives the opportunity to use these kind of smart access yourself.\nScope This specific blog post, will focus on how I am using 1Password to access external applications, using their integration. If you use 1Password, you should have already setup the integration yourself. 
See this link to get started with setting up your own integration to one of your vaults.\nI will however describe what I did -after setting up the above- to integrate that, and I will demonstrate that with an example script.\nDefinitions / explanations 1Password To recap, if you are not familiar, 1Password started as a standalone application, that locally stored your usernames and passwords, and grew various options to automatically submit your credentials to a site, after authenticating to the app, etc. You can use it alone, but also share it with family members for a shy amount per month, or even company wide in case you are interested. Recently it grew a cloud only vault, that give the opportunity to centrally store application credentials and can be fetched from everywhere.\nThat does suggest a potential larger attack vector, then having a vault locally, but you cannot access the data otherwise. You should make the tradeoff for yourself.\nKeyvaults The concept behind keyvaults, is that is generally a \u0026lsquo;store secret data\u0026rsquo; in a safe (aka vault). In some cases you can put something in and cannot (easily) get the information back. That is, if there are fine grained access controls in place that you can can prevent regular users from \u0026lsquo;fetching\u0026rsquo; data from the vault, but only have automated access to that same vault. This makes it needless to store usernames and passwords in env files or in the code itself, you can retrieve it when needed and it will get logged etc.\nA keyvault, nowadays, can also be queried by an API, that works for the cloud providers, and also 1Password has that option. You define a certain access method for your code and/or intermediate applications (like the 1Password-Connect service), and you can programatically query an endpoint that could result in a username, password, url, and what whatever you configure within the entry in the vault.\nThe example The example below, is what I use daily to check whether the php group has a new release and released a new docker image so that I can rebuild my own customized web containers. I used to just fetch the tags and parsed them, but with the requirement to use api-v2 you also need authentication, and this script provides that easily with the help of 1Password.\nThe example uses the 1password connect sdk, a toolkit that talks to the 1Password Connect application that I run in a docker container.\nSo, here comes the script, together with 1Password integration:\n#!/usr/bin/env python3 import requests, json, os, sys from dotenv import load_dotenv load_dotenv() ###### CONFIGURATION ITEM ########## # Password module, either 1password or file # Note that the .env file is required in both # cases, but in one case we use it to connect to # the 1password upstream, else we use the direct # configuration from the file. password_module = \u0026#34;1password\u0026#34; if password_module == \u0026#34;1password\u0026#34;: # Import modules required for 1password import onepasswordconnectsdk from onepasswordconnectsdk.client import ( Client, new_client_from_environment, new_client ) # Start a new 1password controller from environment client: Client = new_client_from_environment() # Setup the connection with the previous client section. # We look for a specific \u0026#39;name of your item\u0026#39; in your application vault. # You can ofcourse set that in a variable or use it as parameter/cli param. 
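# (Aside, an assumption of mine rather than part of the original setup:
#  new_client_from_environment() above typically reads the Connect host and
#  token from environment variables, so a minimal .env covering both modes
#  could look roughly like this; all values are placeholders:
#    OP_CONNECT_HOST=http://localhost:8080
#    OP_CONNECT_TOKEN=<your 1Password Connect token>
#    BASE_URL=<api base url>
#    USERNAME=<username>
#    PASSWORD=<password>
#  )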
config = onepasswordconnectsdk.load_dict(client, { \u0026#34;username\u0026#34;: { \u0026#34;opitem\u0026#34;: \u0026#34;name of your item\u0026#34;, \u0026#34;opfield\u0026#34;: \u0026#34;.username\u0026#34;, }, \u0026#34;password\u0026#34;: { \u0026#34;opitem\u0026#34;: \u0026#34;name of your item\u0026#34;, \u0026#34;opfield\u0026#34;: \u0026#34;.password\u0026#34;, }, \u0026#34;url\u0026#34;: { \u0026#34;opitem\u0026#34;: \u0026#34;name of your item\u0026#34;, \u0026#34;opfield\u0026#34;: \u0026#34;sitedata.url\u0026#34;, }, }) # Items fetched from 1password record and assigned to variable base_url = config[\u0026#34;url\u0026#34;] username = config[\u0026#34;username\u0026#34;] password = config[\u0026#34;password\u0026#34;] # End of 1Password information elif password_module == \u0026#34;file\u0026#34;: # Items configured in .env file base_url = os.getenv(\u0026#39;BASE_URL\u0026#39;) username = os.getenv(\u0026#39;USERNAME\u0026#39;) password = os.getenv(\u0026#39;PASSWORD\u0026#39;) else: print(\u0026#34;You need to configure a valid backend, we cannot proceed like this!\\n\u0026#34;) sys.exit(\u0026#34;Please configure the right settings!\u0026#34;) image = sys.argv[1] version = sys.argv[2] # Set image login and tags url login_url = base_url + \u0026#39;/v2/users/login\u0026#39; tags_url = base_url + \u0026#39;/v2/repositories/library/\u0026#39; + image + \u0026#39;/tags/?page_size=10000\u0026#39; ###### END CONFIGURATION ITEM ########## with requests.Session() as session: post = session.post(login_url, json={\u0026#34;username\u0026#34;: username, \u0026#34;password\u0026#34;: password}) # Variable setting # Set token to token. token = post.json()[\u0026#39;token\u0026#39;] # Add the authentication token to the headers. headers[\u0026#39;Authorization\u0026#39;] = \u0026#34;JWT {}\u0026#34;.format(token) # create the tags list and fetch the proper url tags_list = session.get(tags_url, headers=headers) # refactor the response under tags_list as json and store it in json_response json_response = tags_list.json() # We only need the names in this purpose, loop through the results and match the # version we specified on the commmandline and then print it if there is a match. for name in json_response[\u0026#39;results\u0026#39;]: if version in name[\u0026#34;name\u0026#34;]: print(name[\u0026#34;name\u0026#34;]) ","permalink":"https://www.evilcoder.org/posts/2022-11-11-1password-dockerhub-tags/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eWorking at a financial institute ofcourse requires the use of using secure access to usernames and passwords and the like. For some period that had been out of reach for home consumers or smaller companies, but now with Hashicorp Vault, Ansible Vault, or smart access to applications like 1Password gives the opportunity to use these kind of smart access yourself.\u003c/p\u003e\n\u003ch2 id=\"scope\"\u003eScope\u003c/h2\u003e\n\u003cp\u003eThis specific blog post, will focus on how I am using 1Password to access external applications, using their integration. 
If you use 1Password, you should have already setup the integration yourself.\nSee \u003ca href=\"https://developer.1password.com/docs/connect/get-started/\"\u003ethis link\u003c/a\u003e to get started with setting up your own integration to one of your vaults.\u003c/p\u003e","title":"Using 1password vault to access third party site(s) script-wise"},{"content":" github.com/remkolodder evilcoder.org webmaster@elvandar.org ","permalink":"https://www.evilcoder.org/contact/","summary":"\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/remkolodder\"\u003egithub.com/remkolodder\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://evilcoder.org\"\u003eevilcoder.org\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"mailto:webmaster@elvandar.org\"\u003ewebmaster@elvandar.org\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","title":"Contact"},{"content":"Introduction I started writing this blog post back in May, where I (a bit late) wanted to tell where I went after working for Snow/Sue for 15 years. But I got distracted by other things, amongst them a lot of cool things at work.. so perhaps I should finally finish the entry.\nA year ago (wow, time flies!) I rejoined ING. I was looking for a job where I could continue my Coaching experience and continue to learn, and of course using my strong technical background where possible.\nDuring the COVID period, the occasional band \u0026ldquo;The Streamers\u0026rdquo;, was on TV, together with a load of ING advertisements. ING sponsored the event and that triggered me to have a look at their jobs.\nThe role that I found there, interested me a lot, and I decided to just give it a try. It was one of the best decisions of \u0026lsquo;2021\u0026rsquo; :-).\nInterview Some old friends still work at ING, so I asked one of them (Hi Marc), what the role was about and what I could expect. Since it involved a lot of coaching and using my technical background (in a bit of a different area then normally), I asked the recruiters to do an interview.\nI had an introduction meeting with one of the recruiters, and then with Ad and Edith, and finally with my manager Jos. All conversations where fun conversations and quickly after each other. I got an offer and decided to sign for the offer. We together decided that I would start at Sept 1st 2021.\nStarting So, starting in Covid as Chapter Lead, for a company that I still knew, but changed heavily.. that\u0026rsquo;s kinda.. interesting. My first day at the office I met Jos in Amsterdam, and together we got my Laptop and he showed me how to set it up and where to find things. I decided to make appointments with everyone from my team and slowly introduce myself in those meetings, how else can you do that when nobody is allowed to go to the office?\nI think that was a good call, I quickly made contact with everyone, even with the people not in my direct team and got to learn them a bit. Ofcourse there was more work to do there from my angle, but a start was made. Being a Chapter Lead means HR related activities within ING, so a lot of contact with the people, but also it is a combination role with an engineering part. I did some things with Virtualisation and Storage in the past, but not that heavily as all my fellow technical engineers are doing. Quite a task for me to get up to speed with them and how do I respond to the question: What are you going to focus on and bring to the team Remko? 
I needed to find a way for quite some time, next to learning everyone and learning ING.\nManagement As mentioned I am the Chapter Lead, which means I have HR responsibilities, amongst them helping my team members with their individual development plans, help setting their goals, adressing and resolving issues they come across with, time-off, salaries, health (up to my allowed involvement level). I also have periodic one-on-one conversations with each member of the team. I try to be an open and reachable manager, honest and fair. If I can help, one should let me know, but if I need to make a harder decision, I will always to that. After all, that is part of my job.\nTogether with a fellow Chapter Lead and the Product Owner, we maintain the virtualisation and storage infrastructure for ING, divided in three teams.\nTechnical Next to the managerial part, I also have a technical role. I am an active member of the virtualisation team, where we maintain and manage the Private Cloud Infrastructure for ING. As mentioned I needed to find a way on how to contribute to the team the best as I can, and I think I found a way there with the automation using Ansible. At the moment of writing I am the main person developing (admittedly on the work done by Liam and Leon) the automation roll out further and making it stable. I am also one of the persons in the team that create our Azure DevOps Pipelines, building the CI/CD street for these automations.\nUsing the Agile approach we do that in a 2 week sprint cycle, and everytime we get more into a stable situation.\nOne of the things that I really like is that we share our knowledge when we can and offer demo\u0026rsquo;s etc to our team but also to the rest of the Tribe or wider if needed. Everyone is always willing to help and that makes it the perfect allround team for me.\nYour amibition? My personal amibition is to grow more as Coach, but also get to learn the ropes of the Virtualisation paltform more and more. I am adding value to the team already, but I have a strong drive to do more or be able to do more (if time permits :-)).\nIf you have Questions or want to be WorkING at ING as well, let me know and I will see what I can do for you!\n","permalink":"https://www.evilcoder.org/posts/2022-05-17-working/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eI started writing this blog post back in May, where I (a bit late) wanted to tell where\nI went after working for Snow/Sue for 15 years. But I got distracted by other things,\namongst them a lot of cool things at work.. so perhaps I should finally finish the entry.\u003c/p\u003e\n\u003cp\u003eA year ago (wow, time flies!) I rejoined ING. I was looking for a job where I could continue\nmy Coaching experience and continue to learn, and of course using my strong technical background\nwhere possible.\u003c/p\u003e","title":"WorkING"},{"content":"Introduction During my time at Snow or later Sue, I was allowed to drive various types of cars. I mainly drove Volkswagen\u0026rsquo;s, from the Golf to the Golf Variant, but also an Audi, a Skoda, a Prius (jikes) and in the end I was allowed to drive a Tesla Model3.\nAs a computernerd that is an awesome thing to drive, it has all kind of connectivity options, you can monitoy it from remote with applications like teslamate and store essential data from it.\nAnd now? After saying goodbye to Sue, I was ofcourse also returning my car, then called Mars as Bram (one of my kids) named him. 
After so many moons of driving a car, and almost a year with an all electric vehicle that was something to learn to cope with. But so far I am managing fine, I dont owe a car at all and where needed I can borrow one most of the time. We bike a lot more nowadays and that works out fine. My new employer has it\u0026rsquo;s buildings easily accessible by public transport, so nothing to worry there as well.\nMap So, driving such a car, in my then role as Technical Field Manager, I drove, or should have driven, through the entire country. Since Covid, that never happened as before Covid. Still I drove around the country quite a bit and luckily teslamate drawed me a map. The first period isn\u0026rsquo;t in there, I didnt have the application back then but for approximately a year, you can see my driving in the country.\n","permalink":"https://www.evilcoder.org/posts/2021-12-20-tesla-model3/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eDuring my time at Snow or later Sue, I was allowed to drive various types of cars. I mainly drove\nVolkswagen\u0026rsquo;s, from the Golf to the Golf Variant, but also an Audi, a Skoda, a Prius (jikes) and in\nthe end I was allowed to drive a Tesla Model3.\u003c/p\u003e\n\u003cp\u003eAs a computernerd that is an awesome thing to drive, it has all kind of connectivity options, you can\nmonitoy it from remote with applications like teslamate and store essential data from it.\u003c/p\u003e","title":"Waiving goodbye to my Tesla Model3"},{"content":"Introduction Since some time our kids to to daycare, and the occassionaly place photos on the secured environment. It\u0026rsquo;s a bit problematic to download the photos. On the phone it feels a bit weird how that works and online you can only fetch one photo at at time. But since they are there for a longer period already, that would mean manually download a lot of photos.\nSo, I decided to write a little python wrapper. Using proxyman in between I analyzed the requests and contents and was able to determine how the login procedure works. We first POST a request to the LOGIN URL and bind a session to the \u0026lsquo;requests\u0026rsquo; structure , which we will use later to POST to the album URL (you need to do that first, it seems that the PHPSESSID is then placed on the allowed list or something), which will result in a list of available photos. We can then fetch the items by iterating over the dict and nested list and download the images. We do our best to not overload the server so we delay every other request like a regular browser would also do.\nIf you are not on the session allow list or do not have the Cookie, then you will get images of \u0026lsquo;378\u0026rsquo; bytes big, which looks like an unallowed image. From limited testing you are not able to download photos which is not in your list and/or belong to you. It is not a guarantee though!\nOfcourse there are probably easier or better methods. Please let me know in that case, so that I can learn from it ;-)\nNote that you need the loadenv plugin to get variables from the environment. They are recorded as \u0026ldquo;VARIABLE=VALUE\u0026rdquo; items and work like BASH.\nThe Python Code used below is the code that I used to obtain the currently available photos. 
Adopt as you need it.\n#!/usr/bin/env python3 import requests, json, time, os from dotenv import load_dotenv load_dotenv() # URL Contructs base_url = os.getenv(\u0026#39;BASE_URL\u0026#39;) image_url = base_url + \u0026#39;/ouder/media/download/media/\u0026#39; login_url = base_url + \u0026#39;/login/login\u0026#39; album_url = base_url + \u0026#39;/ouder/fotoalbum/standaardalbum\u0026#39; # private settings from environment file username = os.getenv(\u0026#39;USERNAME\u0026#39;) password = os.getenv(\u0026#39;PASSWORD\u0026#39;) photo_path = os.getenv(\u0026#39;PHOTO_PATH\u0026#39;) # Range between which period we should search for photos year_start = yyyy month_start = mm year_end = yyyy month_end = mm # End of range selection # Login needs a username, password and role # role: 7 = login as parent(/ouder) params = { \u0026#39;username\u0026#39;: username, \u0026#39;password\u0026#39;: password, \u0026#39;role\u0026#39;: \u0026#39;7\u0026#39; } # Pretend we are a real browser, if there are checks for user agent, we can work around this. headers = { \u0026#39;User-Agent\u0026#39;: \u0026#34;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.0 Safari/605.1.15\u0026#34; } # assign the start year and month to the work variables. year = year_start month = month_start with requests.Session() as session: post = session.post(login_url, headers=headers, data=params) # Variable setting # use the PHPSESSID header and assign it the set-Cookie parameter. headers[\u0026#34;PHPSESSID\u0026#34;] = post.headers[\u0026#39;set-Cookie\u0026#39;] # Loop through year/month cycle (between range). while True: # Folder definition which will be reset every month and year and seen whether the folder exists. If not create it. folder = photo_path + str(year) + \u0026#39;/\u0026#39; + str(month) + \u0026#39;/\u0026#39; if not os.path.exists(folder): os.makedirs(folder) # assign the year month dict, this will be used to post to the correct page where we can fetch the resulting image list. year_month = { \u0026#34;year\u0026#34;: year, \u0026#34;month\u0026#34;: month } # Post the year month dict, in response we will receive a list of available photoids. post = session.post(album_url, headers=headers, data=year_month) # The results of the album post is a json structure with \u0026#34;FOTOS\u0026#34; and a list of the available photos. json_response = post.json() # Loop through the list in the json FOTOS structure for photo in json_response[\u0026#34;FOTOS\u0026#34;]: # Construct the photo url photo_source_big = image_url + photo # Obtain the photo r = session.get(photo_source_big, headers=headers) # For every result, write the file to the structure (yyyy/mm) with the obtained name. with open(folder + photo + \u0026#39;.jpg\u0026#39;, \u0026#39;wb\u0026#39;) as handler: handler.write(r.content) # Sleep 10 seconds between requests to not overload the service. time.sleep(10) # After doing the loop if we match the end year and month, break out. if year == year_end and month == month_end: break # If the month is January, decrement the year and reset the month to 12. if month == 1: year = year - 1 month = 12 else: # If the month is not 1, we can decrement it with 1. 
month = month - 1 # end ","permalink":"https://www.evilcoder.org/posts/2021-10-17-flexweb-image-backup-script/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eSince some time our kids to to daycare, and the occassionaly place photos on the secured environment.\nIt\u0026rsquo;s a bit problematic to download the photos. On the phone it feels a bit weird how that works and online you can\nonly fetch one photo at at time. But since they are there for a longer period already, that would mean manually\ndownload a lot of photos.\u003c/p\u003e\n\u003cp\u003eSo, I decided to write a little python wrapper. Using proxyman in between I analyzed the requests and contents and\nwas able to determine how the login procedure works. We first POST a request to the LOGIN URL and bind a session to\nthe \u0026lsquo;requests\u0026rsquo; structure , which we will use later to POST to the album URL (you need to do that first, it seems that\nthe PHPSESSID is then placed on the allowed list or something), which will result in a list of available photos. We can\nthen fetch the items by iterating over the dict and nested list and download the images. We do our best to not overload\nthe server so we delay every other request like a regular browser would also do.\u003c/p\u003e","title":"Fetch Flexweb images for backup purposes in Python"},{"content":"Leaving After 15 precious years with many ups and some downs (personally) I sadly decided to leave the mothership of Sue and be part of another company. I will announce which company later on, I need to make it a bit exciting to read right? ;-)\nLeaving after so many years is a difficult thing, or at least for me. Sue has been part for 3/4 of my professional life, 2 out of my 3 kids where born during my time at Sue. I was seriously ill a few years ago, which made me partially deaf, all during my time at Sue. It feels like close family!\nSo why leave? If you have read the above, it does not make much sense to leave right? Yeah, you are entirely correct. In my last two years at Sue, I opted to become one of the three Technical Field Managers (TFM), which I was chosen to do so. I enjoyed that a lot. The coaching part was part of my assignments more or less but never on such scale. I really like that. I am still technically up to speed, I recently recertified my CCNP. At the same time though I was also following a \u0026ldquo;Coaching\u0026rdquo; training, which I recently certified for as well.\nSadly my role was coming to an end and Sue and I could not find a role that would fit my ambition. I found another role in a different company that allows me to coach directly from start, and use my technical background as well.\nI look back with many many pleasant memories. Snow (Sue when I started working there) had always been a good and pleasant employer and allowed me to evolve where I am now. A big thank you for the company but even more for the people that work there, they supported me and where there when things went wrong on my side (see above); when my kids where born etc. So with pleasant memories and a tear, I will be saying goodbye officially at the end of August.\nDear people of Sue/Snow, colleague\u0026rsquo;s, friends, thank you very much for every smile, tear, friendship, up and down for the last 15 years. You have been great! 
I was lucky to work with all of you!\n","permalink":"https://www.evilcoder.org/posts/2021-08-01-leaving-sue/","summary":"\u003ch2 id=\"leaving\"\u003eLeaving\u003c/h2\u003e\n\u003cp\u003eAfter 15 precious years with many ups and some downs (personally) I sadly decided to leave the mothership\nof Sue and be part of another company. I will announce which company later on, I need to make it a bit\nexciting to read right? ;-)\u003c/p\u003e\n\u003cp\u003eLeaving after so many years is a difficult thing, or at least for me. Sue has been part for 3/4 of my\nprofessional life, 2 out of my 3 kids where born during my time at Sue. I was seriously ill a few years\nago, which made me partially deaf, all during my time at Sue. It feels like close family!\u003c/p\u003e","title":"Leaving Snow / Sue."},{"content":"CCNP - ENCOR \u0026lsquo;Recently\u0026rsquo; Cisco changed the CCNP track quite a lot. In the past it was composed of vertical colums so to say with Routing, Switching and Tshoot as seperated colums. This had the advantage that you could focus on pure Routing and pure Switching and no need to worry about bringing many different knowledge into the exam. To recertify you needed to pass one of the colums to pass and extend the certification for another three years.\nThat is no longer the case, instead of being vertical it is now horizontal. Meaning you need to bring a lot of broad knowledge to the table to be able to do the exam. There are many topics being passed and I will try to write some words about the topics. In order they are:\nSoftware Defined Networks, SD-WAN, SD-Access Wireless LISP Python Routing Switching But first I want to tell something about the changed world.\nChanges So as said the world is changing, and it is changing rapidly. We all can see and know that. But that goes for networks as well. When I started working (back in the dino days.. 2001) the world was rather easy. There were a few switches and a few routers and the network was logically devided. ACL\u0026rsquo;s where in place to protect things and life was good. We could \u0026ldquo;easily\u0026rdquo; oversee things and manually update them and everyone was mostly happy.\nBut then things changed, things like Virtual Machines started to appear, networks grew, demand grew. Wireless networks were introduced, people could connect from everywhere, with every device. No one is willing to connect to a cable anymore and our servers are placed everywhere and could live in Datacenter A, B, C or D. Automation appeared, we could spin up machines and configurations when needed and automatically deploy them. If there is need for more machines or facilities the cloud can auto scale and do magic things. That basically asks for connectivity that is all over. And if you move from location A to B, or a machine gets live migrated it still needs the same connectivity and be reachable on the same addressing. This is where SDN comes in.\nSoftware Defined Networking, SD-WAN, SD-Access The network is still build from routers and switches, but that is just a transport layer in the SDN world. SDN provides an overlay network, which in my view is organic and grows and shrinks where needed, but moves with devices and users. A user is no longer bound to an IP address, but instead is an object in the network which can live everywhere. As long as the user is reachable, the same connectivity is possible. The same goes for servers, as long as the server is reachable it doesn\u0026rsquo;t matter where the device lives. 
As long as the overlay network can find it, it works. If a wireless devices \u0026lsquo;roams\u0026rsquo; (moves between access points) it doesn\u0026rsquo;t matter if that is in building a or b, smart setups tunnel the traffic to the proper controller and process it.\nCisco created SD-WAN, SD-Access and uses VXLAN with VNI and VNID extensively for this purpose. It is a large piece of the CCNP ENCOR exam. You need to know what the previous terms are and how they are build and communicate.\nWireless In the previous incarnations of the CCNP exam, I never had much to do with WiFi networks or hardware like WLC\u0026rsquo;s. But people dont want cables anymore, they want quicker and better WiFi and get the best experience without that annoying cable that is always to short or limited in movement. So CCNP grew WiFi topics. But as Administrator you want it to be practical as well, so you connect it to the SD-fabric. You also need to pick the right antenna and need to understand what kind of interference you have and what all those magic numbers mean in the statistics and/or WLC. You need to study Wireless well!\nLISP I first assumed that this was a programming language, but is also a router locator/node locator protocol. Since devices can roam on multiple places and cross (in legacy thinking) boundaries, the network needs to know where to find an object. The LISP set of protocols and functions come into play to quicky find an EID and redirect traffic to the proper place. Or use border nodes / gateways to communicate to and from external networks.\nPython So who knew that a scripting or programming language (lets not start the debate on what it is!) was going to be part of the CCNP exam? I didn\u0026rsquo;t and there it was. You need to be able to query data from REST API\u0026rsquo;s and such and be able to understand how to handle them. That includes understanding JSON and how to handle it. You can use the postman application to test api\u0026rsquo;s and such and generate pythoncode for you, but you will need to understand them all for the ENCOR exam.\nRouting and Switching This is the part that I first learned with CCNP, there are still topics like STP, VLAN, VTP, EtherChannel, BGP, OSPF, EIGRP, HSRP, VRRP, GLBP, IPv6, IPv4, NAT, Multicast. Eventhough it isn\u0026rsquo;t as focussed as the Routing and Switching exams of the past, you still need to have solid understanding about them.\nSummary As you can see, the topics grew and got broader, many new techniques are being asked and you will need solid understanding of the above. You also need to know about some security applications like Umbrella and Stealthwatch, but the above topics are the largest I think.\nIf you want to discuss this with me, have ideas or disagree, just email me at webmaster a.t. evilcoder.org\n","permalink":"https://www.evilcoder.org/posts/2021-06-03-ccnp-recertified/","summary":"\u003ch2 id=\"ccnp---encor\"\u003eCCNP - ENCOR\u003c/h2\u003e\n\u003cp\u003e\u0026lsquo;Recently\u0026rsquo; Cisco changed the CCNP track quite a lot. In the past it was composed of vertical colums so to\nsay with Routing, Switching and Tshoot as seperated colums. 
This had the advantage that you could focus on\npure Routing and pure Switching and no need to worry about bringing many different knowledge into the exam.\nTo recertify you needed to pass one of the colums to pass and extend the certification for another three\nyears.\u003c/p\u003e","title":"Recertified for CCNP - ENCOR"},{"content":"Introduction For years I have been playing \u0026lsquo;Clash of Clans\u0026rsquo;. A strategy game that requires you to actively engage with others in either direct battles, or group based battles. You can improve your village by leveling-up housing, spells, troops, the town hall, heroes, having boosters etc. Ofcourse the makers allow you to play for free, which can take quite some time to get progress, or you can buy additional resources and improvements that speed up the process.\nI have been playing this since I think 2014, with in between some resting periods. I think you cannot play this continuously without being bored or frustrated that things take a long time and get more expensive (with internal resources) after every upgrade.\nMy last round was from 2016 till just a few weeks ago. I played for the clan Holland Dukes, a nice group of people, with a good mixture of young people and older people. Both are represented in the group as member, Elder and coLeaders.\nWhy writing additional scripts? You can view your stats mainly through the game, but that is more difficult when you want to get an overview of multiple users or see others stats. I always had a site for Holland Dukes, but it was not really filled with proper content. I planned on working with this when I tried to understand more of gohugo (a static website generator, written in go). So in the end I did. I needed to workout how Gohugo was going to help me with this first. If someone wants to know how I did that, let me know how I can help and I will try to assist.\nAfter having the skeleton for the site, which added Chart.JS and more stuff I needed to write several scripts that got me the API data. I decided to write these scripts in Python since it is multi-portable and can talk to API easily. I never wrote something like this before so it was a learning experience. 
I fetched the clan details because every detail needed was inside.\nClash of Clans The API code: Python Below is the code that I used to fetch the clan information from the API.\n#!/usr/bin/env python3 import requests, json, urllib, time home_token = \u0026#39;home_token here\u0026#39; # version used on webhost itself token = \u0026#39;production_token here\u0026#39; cur_time = int(str(time.time()).replace(\u0026#39;.\u0026#39;, \u0026#39;\u0026#39;)) clantag = \u0026#39;clan_tag\u0026#39; headers = {\u0026#39;Authorization\u0026#39;: \u0026#39;Bearer \u0026#39; + token } url_clan = \u0026#39;https://api.clashofclans.com/v1/clans/%23\u0026#39; + clantag[1:] players = [] all_info = {} out_all = \u0026#39;../json_data/clan-info.json\u0026#39; out_clan = \u0026#39;../json_data/clan-info-clan.json\u0026#39; out_players = \u0026#39;../json_data/clan-info-players.json\u0026#39; r = requests.get(url_clan, headers=headers) if r.ok: json_clan = json.loads(r.text) updated_line = { \u0026#34;updatedOn\u0026#34;: cur_time } clan_line = { \u0026#34;clan\u0026#34;: json_clan } all_info = updated_line all_info.update(clan_line) for player in json_clan[\u0026#39;memberList\u0026#39;]: player_tag = player[\u0026#39;tag\u0026#39;] url_player = \u0026#39;https://api.clashofclans.com/v1/players/%23\u0026#39; + player_tag[1:] rp = requests.get(url_player, headers=headers) if rp.ok: json_player = json.loads(rp.text) players.append(json_player) player_line = { \u0026#34;players\u0026#34;: players } all_info.update(player_line) with open(out_all, \u0026#34;w\u0026#34;) as fw_all: fw_all.write(json.dumps(all_info)) fw_all.close() with open(out_clan, \u0026#34;w\u0026#34;) as fw_clan: fw_clan.write(json.dumps(json_clan)) fw_clan.close() with open(out_players, \u0026#34;w\u0026#34;) as fw_players: fw_players.write(json.dumps(players)) fw_players.close() Parsing of the fetched data Next to fetching the data with the API, I needed to parse the returned JSON and do something with it. 
I write the name_ variables to a gohugo markdown file where it is being used in the templates to generate a result.\nFor gohugo to understand these examples I needed to escape the curly brackets later on.\nSure enough I could not use variables instead and print them directly, which would make the code a lot smaller, but from my POV that would reduce readability.\n#!/usr/bin/env python3 import json json_file_all = \u0026#39;json_data/clan-info.json\u0026#39; json_file = \u0026#39;../json_data/clan-info-players.json\u0026#39; with open(json_file) as f: data = json.load(f) for player in data: name = player[\u0026#39;name\u0026#39;] name_tag = player[\u0026#39;tag\u0026#39;] name_explevel = player[\u0026#39;expLevel\u0026#39;] name_thlevel = player[\u0026#39;townHallLevel\u0026#39;] name_trophies = player[\u0026#39;trophies\u0026#39;] name_besttrophies = player[\u0026#39;bestTrophies\u0026#39;] name_warstars = player[\u0026#39;warStars\u0026#39;] name_attackwins = player[\u0026#39;attackWins\u0026#39;] name_defensewins = player[\u0026#39;defenseWins\u0026#39;] name_bhlevel = player[\u0026#39;builderHallLevel\u0026#39;] name_versustrophies = player[\u0026#39;versusTrophies\u0026#39;] name_bestversustrophies = player[\u0026#39;bestVersusTrophies\u0026#39;] name_versusbattlewins = player[\u0026#39;versusBattleWins\u0026#39;] name_versusbattlewincount = player[\u0026#39;versusBattleWinCount\u0026#39;] name_role = player[\u0026#39;role\u0026#39;] name_donations = player[\u0026#39;donations\u0026#39;] name_donationsreceived = player[\u0026#39;donationsReceived\u0026#39;] name_clan = dict(player[\u0026#39;clan\u0026#39;]) if \u0026#39;townHallWeaponLevel\u0026#39; in player: name_townhallweaponlevel = player[\u0026#39;townHallWeaponLevel\u0026#39;] if \u0026#39;league\u0026#39; in player: name_league = dict(player[\u0026#39;league\u0026#39;]) name_achievements = list(player[\u0026#39;achievements\u0026#39;]) name_troops = list(player[\u0026#39;troops\u0026#39;]) name_heroes = list(player[\u0026#39;heroes\u0026#39;]) name_spells = list(player[\u0026#39;spells\u0026#39;]) outfile = \u0026#39;../content/players/\u0026#39; + name + \u0026#39;.md\u0026#39; with open(outfile, \u0026#34;w\u0026#34;) as fw: fw.write(\u0026#34;---\\n\u0026#34;) fw.write(\u0026#34;title: \u0026#39;Detail info \u0026#34; + name + \u0026#34;\u0026#39;\\n\u0026#34;) fw.write(\u0026#34;coc_css: true \\n\u0026#34;) fw.write(\u0026#34;aliases: [/players/\u0026#34; + name + \u0026#34;]\\n\u0026#34;) fw.write(\u0026#34;player_name: \u0026#34; + name + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_tag: \\\u0026#34;\u0026#34; + name_tag + \u0026#34;\\\u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_explevel: \u0026#34; + str(name_explevel) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_thlevel: \u0026#34; + str(name_thlevel) + \u0026#34;\\n\u0026#34;) if name_townhallweaponlevel: fw.write(\u0026#34;player_townhallweaponlevel: \u0026#34; + str(name_townhallweaponlevel) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_trophies: \u0026#34; + str(name_trophies) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_besttrophies: \u0026#34; + str(name_besttrophies) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_warstars: \u0026#34; + str(name_warstars) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_attackwins: \u0026#34; + str(name_attackwins) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_defensewins: \u0026#34; + str(name_defensewins) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_bhlevel: \u0026#34; 
+ str(name_bhlevel) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_versustrophies: \u0026#34; + str(name_versustrophies) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_bestversustrophies: \u0026#34; + str(name_bestversustrophies) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_versusbattlewins: \u0026#34; + str(name_versusbattlewins) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_versusbattlewincount: \u0026#34; + str(name_versusbattlewincount) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_donations: \u0026#34; + str(name_donations) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_donationsreceived: \u0026#34; + str(name_donationsreceived) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;player_role: \u0026#34; + name_role + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;dict_achievements: \u0026#34; + str(name_achievements) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;dict_clan: \u0026#34; + str(name_clan) + \u0026#34;\\n\u0026#34;) if name_league: fw.write(\u0026#34;dict_league: \u0026#34; + str(name_league) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;dict_troops: \u0026#34; + str(name_troops) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;dict_heroes: \u0026#34; + str(name_heroes) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;dict_spells: \u0026#34; + str(name_spells) + \u0026#34;\\n\u0026#34;) fw.write(\u0026#34;---\\n\u0026#34;) fw.write(\u0026#34;\\{\\{\u0026lt; clan_player_structure url=\\\u0026#34;\u0026#34; + json_file_all + \u0026#34;\\\u0026#34; \u0026gt;\\}\\}\\n\u0026#34;) fw.write(\u0026#34;\\n\u0026#34;) fw.close() outfile_bb = \u0026#39;../content/builderbase/\u0026#39; + name + \u0026#39;.md\u0026#39; with open(outfile_bb, \u0026#34;w\u0026#34;) as fbb: fbb.write(\u0026#34;---\\n\u0026#34;) fbb.write(\u0026#34;title: \u0026#39;Builder Base info for \u0026#34; + name + \u0026#34;\u0026#39;\\n\u0026#34;) fbb.write(\u0026#34;coc_css: true \\n\u0026#34;) fbb.write(\u0026#34;aliases: [/builderbase/\u0026#34; + name + \u0026#34;]\\n\u0026#34;) fbb.write(\u0026#34;player_name: \u0026#34; + name + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;player_tag: \\\u0026#34;\u0026#34; + name_tag + \u0026#34;\\\u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;player_explevel: \u0026#34; + str(name_explevel) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;player_bhlevel: \u0026#34; + str(name_bhlevel) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;player_versustrophies: \u0026#34; + str(name_versustrophies) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;player_bestversustrophies: \u0026#34; + str(name_bestversustrophies) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;player_versusbattlewins: \u0026#34; + str(name_versusbattlewins) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;player_versusbattlewincount: \u0026#34; + str(name_versusbattlewincount) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;player_donations: \u0026#34; + str(name_donations) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;player_donationsreceived: \u0026#34; + str(name_donationsreceived) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;player_role: \u0026#34; + name_role + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;dict_achievements: \u0026#34; + str(name_achievements) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;dict_clan: \u0026#34; + str(name_clan) + \u0026#34;\\n\u0026#34;) if name_league: fbb.write(\u0026#34;dict_league: \u0026#34; + str(name_league) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;dict_troops: \u0026#34; + str(name_troops) + \u0026#34;\\n\u0026#34;) 
fbb.write(\u0026#34;dict_heroes: \u0026#34; + str(name_heroes) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;dict_spells: \u0026#34; + str(name_spells) + \u0026#34;\\n\u0026#34;) fbb.write(\u0026#34;---\\n\u0026#34;) fbb.write(\u0026#34;\\{\\{\u0026lt; clan_player_base_structure url=\\\u0026#34;\u0026#34; + json_file_all + \u0026#34;\\\u0026#34; \u0026gt;\\}\\}\\n\u0026#34;) fbb.write(\u0026#34;\\n\u0026#34;) fbb.close() ","permalink":"https://www.evilcoder.org/posts/2021-05-14-clashofclans-python/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eFor years I have been playing \u0026lsquo;Clash of Clans\u0026rsquo;. A strategy game that requires you to actively engage with others in either\ndirect battles, or group based battles. You can improve your village by leveling-up housing, spells, troops, the town\nhall, heroes, having boosters etc. Ofcourse the makers allow you to play for free, which can take quite some time to get\nprogress, or you can buy additional resources and improvements that speed up the process.\u003c/p\u003e","title":"Clash of Clans Python Scripts"},{"content":"Introduction For a while now, my hosting company JR-Hosting, was stopped. But I still needed a way to host my mailboxes and that of several other domains that I own. It was brought to my attention that while playing with Docker I could combine that with a mailserver setup. Mailcow, or better Mailcow Dockerized. As there was a previous version which did not use Docker :-).\nThe setup is trivial Installation link or at least it was for me. I used one of the components before, at home, for JRHosting and at work where I created the foundation for the currently still in use rspamd setup at work.\nMailcow Mailcow consists of several helper programs, like rspamd for antispam filtering, dovecot for hosting the mailboxes (storage and delivery/filtering), postfix for the actual mailserver, nginx and php-fpm for the webpages that are hosted on the platform.\nRspamd As mentioned one of the applications is rspamd, this is a fast and modulair anti spam system that uses several modules and feedback from those modules to define a score to a message. It\u0026rsquo;s not soley bayes or rbls that form the strength of rspamd. It is written by Vsevolod Stakhov, a hardliner when encountering things that are just not right (outspoken), but a very pragmatic programmer. You need to convince him when there is an error, you will not always get an easy time, but given my years of seeing him in action, he always evaluates your feedback and if found proven he will fix the issues.\nGrafana One of the things that are lacking in all applications, is a dashboard for the environment. And truth to be told, you can create a dashboard for the individual applications, some of them even support prometheus exporter output. But there is no general way to see the individual state. And this is likely not to come, everyone\u0026rsquo;s needs are different. What works for me is an specific overview, and what works for you is another specific overview. Those might not align or even look a like.\nDesigning my own So, there is this thing called \u0026lsquo;mailcow-exporter\u0026rsquo;, I use this one from Docker: thej6s/mailcow-exporter which reads your mailcow parameters via the API and exposes them as prometheus understandable format.\nI then started experimenting with Grafana, I know the tool enough for my needs but I like to play around every now and then to make it even better. 
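For reference, wiring such an exporter into Prometheus is just one extra scrape job. A minimal sketch, with an assumed job name, host and port of my own (check the thej6s/mailcow-exporter README for the port it really listens on):

# prometheus.yml (fragment)
scrape_configs:
  - job_name: 'mailcow'
    static_configs:
      - targets: ['mailcow-exporter.example.org:9099']

Once Prometheus scrapes the exporter, its metrics can be used as a regular data source for the Grafana panels.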
I am not graphically oriented though, so there are things that might be done better.\nIt worked out for the following dashboard; note that I set the timer to \u0026ldquo;5\u0026rdquo; minutes for some of the graphs because I had just updated my mailcow installation and some containers were counted twice :-) :\nWant more? If you are interested in the above dashboard, poke me and we\u0026rsquo;ll see how we can arrange for you to have the dashboard. If you have suggestions, please let me know as well!\nYou can also find the direct json file here: https://github.com/remkolodder/mailcow-dashboard\n","permalink":"https://www.evilcoder.org/posts/2021-05-13-mailcow-grafana/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eFor a while now, my hosting company JR-Hosting, was stopped. But I still needed a way to host my mailboxes\nand that of several other domains that I own. It was brought to my attention that while playing with Docker\nI could combine that with a mailserver setup. Mailcow, or better Mailcow Dockerized. As there was a previous\nversion which did not use Docker :-).\u003c/p\u003e\n\u003cp\u003eThe setup is trivial \u003ca href=\"https://mailcow.github.io/mailcow-dockerized-docs/i_u_m_install/\"\u003eInstallation link\u003c/a\u003e\nor at least it was for me. I used one of the components before, at home, for JRHosting and at work where\nI created the foundation for the currently still in use rspamd setup at work.\u003c/p\u003e","title":"Mailcow Grafana Dashboard"},{"content":"Today my first Sue Student Edition happened. Well, not entirely true, it has happened a few times already and I was present most of the time. But today was somewhat special. I gave the keynote before my coworker Tijmen did a presentation + workshop. And I recapped the event afterwards.\nDigital edition As Covid is still very much around and restrictions are in place all over, we needed to host this event on our digital platform. For me that was the first time doing a talk at a digital event. Of course as coach for Sue I do this daily, but not as a speaker with polls and such. It is good that we practised a bit upfront to become familiar with the tools.\nOne of my main worries was that with a live audience you can influence the setting a lot. You can interact with people and see the audience and whether they pay attention or fell asleep. With a digital audience you cannot see those signs that easily. People are not required to enable their camera, for example, so things can be hidden from you. People can watch different things and pretend that they are watching your talk.\nBut, at the start we had a good conversation with people from the audience and we talked about the difference between live and digital education. They too find it more difficult to interact with others during digital sessions. Of course there are benefits as well. You can sleep a bit longer because there is no need to travel.\nThe talks I gave a talk about \u0026ldquo;Infrastructure and security\u0026rdquo;, zooming in on Security Essentials and using the C-I-A triad. By using several polls I tried to interact with the audience, which worked out above expectations. Since I have long-term experience in the field I could use several examples from actual situations and found a few more on the internet which I used in the slides as well. During the talk we had several discussions with the audience, which made the talk interactive. Thank you to the audience for that! 
I went a bit over time because of that and I think it is awesome to have such interaction. I was allowed to introduce my colleague Tijmen who took over the talk and went in depth with security concepts, tools and offered an actual hacking workshop with selfmade boxes.\nThanks I think the event was a succes and feedback from the audience suggested that as well. It could not have been such a success without the help and support of Sue B.V., Laura, Tijmen, Koen, Patrick, Raimond and ofcourse the audience. I hope that I will be able to attend and/or talk at such an event again and hopefully meet you again where we can discuss during a drink.\n","permalink":"https://www.evilcoder.org/posts/2021-04-22-sue-student-edition/","summary":"\u003cp\u003eToday my first Sue Student Edition happened. Well not entirely true, it happened a few times already\nand I was present most of the times. But today was somewhat special. I gave the keynote before my\ncoworker Tijmen did a presentation + workshop. And I recapped the event afterwards.\u003c/p\u003e\n\u003ch2 id=\"digital-edition\"\u003eDigital edition\u003c/h2\u003e\n\u003cp\u003eAs Covid is still around very much and restrictions are in place all over, we needed to host this event\non our digital platform. For me that was the first time doing a talk on a digital event. Ofcourse as\ncoach for Sue I do this daily, but not as speaker with polls and such. It is good that we practised a\nbit upfront to be familiar with the tools.\u003c/p\u003e","title":"Sue Student Edition 2021"},{"content":"For some time I am a perl fan, but perl is not that popular anymore so I decided to try and see whether I can use Python as well. After moving my systems from FreeBSD to Ubuntu and Debian (proxmox needs it) I also used the \u0026rsquo;emerging-ipset-update.pl\u0026rsquo; script to drop emerging treats as soon as possible.\nPython After or rather while following the Udemy\u0026rsquo;s \u0026lsquo;2020 Complete Python bootcamp: From Zero to Hero\u0026rsquo; by Jose Portilla (Hi!) I decided that I could rewrite the perl script into python. And so I did. Below is the version that resulted from that effort. It can surely be smarter, so poke me on my email address if that is possible and I\u0026rsquo;ll update this. Thanks Jose for your great course! I appreciate it!\nThe script #!/usr/bin/env python3 # ## 25/09/2020: ## Python based E.T parser by Remko Lodder \u0026lt;remko@elvandar.org\u0026gt; ## ## The script is based on the original perl script by Joshua Gimer and an unknown author who created ## my reference version: https://doc.emergingthreats.net/pub/Main/EmergingFirewallRules/emerging-ipset-update.pl.txt ## ## The netaddr functionality in the form of IPNetwork and cidr merge ## are taken from the website: http://www.korznikov.com/2014/08/creating-black-list-of-ips-for-iptables.html ## Thank you for the pointers there, which I shamelessly used to create this variant. ## ## The script fetches the ip addreses/ranges that are potential treats and creates an ipset list from it. ## The ipset list is then used by iptables to produce a working firewall set. 
import time import urllib.request import os import re import syslog from netaddr import * from socket import timeout from urllib.request import Request, urlopen from urllib.error import URLError, HTTPError # Prototype variables n = False # Bootup messaging and syslog syslog.openlog(logoption=syslog.LOG_PID, facility=syslog.LOG_INFO) syslog.syslog(\u0026#39;Starting Emerging Threats (ET) IPTables update script....\u0026#39;) print (\u0026#39;Starting Emerging Threats (ET) IPTables update script....\u0026#39;) # Two times a day timer timer = 43200 # Sleep a bit after an timeout. timeout_timer = 120 # The location of the Emerging Threats revison number file. emerging_root = \u0026#39;https://rules.emergingthreats.net/fwrules\u0026#39; emerging_fwrev = emerging_root + \u0026#39;/FWrev\u0026#39; emerging_fwrules = emerging_root + \u0026#39;/emerging-Block-IPs.txt\u0026#39; # Temporary files tmp_dir = \u0026#39;/tmp\u0026#39; rules_file = tmp_dir + \u0026#39;/emerging_iptables2.txt\u0026#39; # Binary location iptables = \u0026#39;/sbin/iptables\u0026#39; ipset = \u0026#39;/sbin/ipset\u0026#39; # Iptable chains iptables_att_chain = \u0026#39;ATTACKERS\u0026#39; iptables_drop_chain = \u0026#39;ETLOGDROP\u0026#39; # ipset names ipset_botccnet = \u0026#39;botccnet\u0026#39; # Get the current IPTables ruleset revison number. def get_fw_rev(): response = urllib.request.urlopen(emerging_fwrev) data = response.read() text = data.decode(\u0026#39;utf-8\u0026#39;) return text # Get the firewall rules and ignore the errors def get_fw_rules(): try: with urllib.request.urlopen(emerging_fwrules) as response, open(rules_file, \u0026#39;wb\u0026#39;) as out_file: data = response.read() # a `bytes` object out_file.write(data) except HTTPError as e: print(\u0026#39;There was an error: \u0026#39;, e.code) pass except URLError as e: print(\u0026#39;Something went wrong in reaching the server: \u0026#39;, e.reason) pass except ConnectionResetError: print(\u0026#39;---\u0026gt; Connection reset, retrying in \u0026#39; + timeout_timer) time.sleep (timeout_timer) process_et_rules() pass except timeout: print(\u0026#39;---\u0026gt; Connection timed out, retrying in \u0026#39; + timeout_timer) time.sleep (timeout_timer) process_et_rules() pass # Be able to fork a process def parent_child(): n = os.fork() # If N is \u0026gt;0 then a child had not yet been forked (we are in master process) if n \u0026gt; 0: # Setup first tables, they will not be readded later on. os.system(iptables + \u0026#39; -N \u0026#39; + iptables_att_chain) os.system(iptables + \u0026#39; -N \u0026#39; + iptables_drop_chain) # Flush previously assigned iptables and ipset parameters os.system(iptables + \u0026#39; -F \u0026#39; + iptables_drop_chain) os.system(iptables + \u0026#39; -F \u0026#39; + iptables_att_chain) os.system(iptables + \u0026#39; -D FORWARD -j \u0026#39; + iptables_att_chain) os.system(iptables + \u0026#39; -D INPUT -j \u0026#39; + iptables_att_chain) # Create new iptables and ipset parameters os.system(iptables + \u0026#39; -I FORWARD 1 -j \u0026#39; + iptables_att_chain) os.system(iptables + \u0026#39; -I INPUT 1 -j \u0026#39; + iptables_att_chain) os.system(iptables + \u0026#39; -A \u0026#39; + iptables_drop_chain + \u0026#39; -j LOG --log-prefix \u0026#34;ET BLOCK: \u0026#34;\u0026#39;) os.system(iptables + \u0026#39; -A \u0026#39; + iptables_drop_chain + \u0026#39; -j DROP\u0026#39;) # Remove current ipset list, and recreate it. 
os.system(ipset + \u0026#39; -X \u0026#39; + ipset_botccnet) os.system(ipset + \u0026#39; -N \u0026#39; + ipset_botccnet + \u0026#39; nethash\u0026#39;) # Create the ipset matching chain os.system(iptables + \u0026#39; -A \u0026#39; + iptables_att_chain + \u0026#39; -p ALL -m set --match-set \u0026#39; + ipset_botccnet + \u0026#39; src,src -j \u0026#39; + iptables_drop_chain) os.system(iptables + \u0026#39; -A \u0026#39; + iptables_att_chain + \u0026#39; -p ALL -m set --match-set \u0026#39; + ipset_botccnet + \u0026#39; dst,dst -j \u0026#39; + iptables_drop_chain) # else if N = 0 then that means that we are in child mode and can start processing the et rules. else: process_et_rules() def process_et_rules(): # Reset revision number before starting. ip_list = [] rev_num = 0 while True: old_rev_num = rev_num rev_num = get_fw_rev() if int(rev_num) \u0026gt; int(old_rev_num): get_fw_rules() # loop through rules file, remove empty lines and process the remaining ip # by using a regular expression that matches \u0026lt;wordboundary\u0026gt;ipaddress\u0026lt;wordboundary\u0026gt; with open(rules_file, \u0026#34;r\u0026#34;) as read_file: lines = [line for line in read_file.readlines() if line.strip()] for line in lines: line = line.strip() if re.findall(r\u0026#39;\\b\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\/?[0-9]{1,2}?\\b\u0026#39;,line): ip_list += [IPNetwork(line)] ip_list = cidr_merge(ip_list) amount_addresses = len(ip_list) # Flush entries in the ipset list. os.system(ipset + \u0026#39; flush \u0026#39; + ipset_botccnet) for ip_address in ip_list: # Add new entries from the list. os.system(ipset + \u0026#39; -A \u0026#39; + ipset_botccnet + \u0026#39; \u0026#39; + str(ip_address)) # Summarize what we did, so that we have an idea how many addresses should be there. syslog.syslog(\u0026#34;Wrote %d addresses to the ipset list, using ET version %d, going to sleep....\u0026#34; % (amount_addresses,int(rev_num))) time.sleep (timer) else: syslog.syslog(\u0026#34;The old version: %d is the same as the current version: %d... sleeping a bit and retry....\u0026#34; % (int(old_rev_num), int(rev_num))) time.sleep (timer) # If a child is not yet found, fork if n == 0: parent_child() ","permalink":"https://www.evilcoder.org/posts/2020-09-26-emerging-threats-python/","summary":"\u003cp\u003eFor some time I am a perl fan, but perl is not that popular anymore so I\ndecided to try and see whether I can use Python as well. After moving my\nsystems from FreeBSD to Ubuntu and Debian (proxmox needs it) I also used the\n\u0026rsquo;emerging-ipset-update.pl\u0026rsquo; script to drop emerging treats as soon as possible.\u003c/p\u003e\n\u003ch2 id=\"python\"\u003ePython\u003c/h2\u003e\n\u003cp\u003eAfter or rather while following the Udemy\u0026rsquo;s \u0026lsquo;2020 Complete Python bootcamp:\nFrom Zero to Hero\u0026rsquo; by Jose Portilla (Hi!) I decided that I could rewrite the\nperl script into python. And so I did. Below is the version that resulted from\nthat effort. It can surely be smarter, so poke me on my email address if that\nis possible and I\u0026rsquo;ll update this. Thanks Jose for your great course! I\nappreciate it!\u003c/p\u003e","title":"Emerging Threats Python script"},{"content":"At the end of the month, my company JR-Webscripting en Hosting, will cease to exist. Justin and myself thought things over and with changes in our lives in the entire last year(s), we decided that we needed to spend our time differently. 
The time and effort of running your own business in the (flooded) hosting market is extremely difficult. You need scale, and offer an interesting price. And it will cost you countless hours. For us, it was worth this effort for almost 15 years, but the balance was lost not that long ago. While our reasons where not in the financial sector (In my opinion, the main reason for stopping most businesses), it does play along. The margins are small with our pricing scheme, and the benefits are low. Too low factually. What weight most for me was the investment of time that I can no longer realise that easily.\nSo, since the beginning of october we decided to inform our customers and ask them to move to another provider at the end of the contract. All our customers migrated quickly and recently we migrated away the last customer. That marked the end of the hosting servers, we have them on shutdown and they will be removed in a little. The site is still available as placeholder on my host and mail for the domain will still be incoming for a little more.\nFor our customers and interested parties: Thank you for the trust in the past 15 years! It had been a blast.\n","permalink":"https://www.evilcoder.org/posts/2020-03-19-jrhosting-ended/","summary":"\u003cp\u003eAt the end of the month, my company JR-Webscripting en Hosting, will cease to exist. Justin and\nmyself thought things over and with changes in our lives in the entire last year(s), we decided\nthat we needed to spend our time differently. The time and effort of running your own business in\nthe (flooded) hosting market is extremely difficult. You need scale, and offer an interesting price.\nAnd it will cost you countless hours. For us, it was worth this effort for almost 15 years, but\nthe balance was lost not that long ago. While our reasons where not in the financial sector\n(In my opinion, the main reason for stopping most businesses), it does play along. The margins are\nsmall with our pricing scheme, and the benefits are low. Too low factually. What weight most for\nme was the investment of time that I can no longer realise that easily.\u003c/p\u003e","title":"The end of JR-Hosting"},{"content":"On my old blog, I tried to remember the passing of Freddy Mercury periodically. I failed to do that for several years. Freddy, wherever you may be, rest in peace. I still remember your music and listen to it very often. I visited the concerts of the remaining band members of Queen, it\u0026rsquo;s still amazing, even without you.\nOne day, we will meet!\nImage used from express.co.uk and referenced from them: ","permalink":"https://www.evilcoder.org/posts/2019-11-24-im-freddy-mercury/","summary":"\u003cp\u003eOn my old blog, I tried to remember the passing of Freddy Mercury periodically.\nI failed to do that for several years. Freddy, wherever you may be, rest in peace.\nI still remember your music and listen to it very often. 
I visited the concerts\nof the remaining band members of Queen, it\u0026rsquo;s still amazing, even without you.\u003c/p\u003e\n\u003cp\u003eOne day, we will meet!\u003c/p\u003e\n\u003cp\u003eImage used from express.co.uk and referenced from them:\n\n  \u003cimg loading=\"lazy\" src=\"https://cdn.images.express.co.uk/img/dynamic/35/590x/Freddie-Mercury-death-His-final-hours-1208498.jpg?r=1574550506127\" alt=\"Freddy Mercury\"  title=\"Freddy Mercury\"  /\u003e\u003c/p\u003e","title":"In Memoriam Freddy Mercury 5-9-1946 - 24-11-1991"},{"content":"As the title states (in dutch sorry it\u0026rsquo;s the original name, but you can translate it to: autumn conference NLUUG2019), I was at the NLUUG 2019. My last visit with Snow (now Sue) was when it was still in \u0026ldquo;De Reehorst\u0026rdquo; in Ede. Which appeared to have been a few years ago. One of the reasons for not visiting it more often is that I was soo much into FreeBSD that I didn\u0026rsquo;t look around that much. That changed this year when I stopped volunteering for FreeBSD. I got my RHCSA and RHCE this year and as technical field manager I decided to show my face and talk with people (and learn something myself as well).\nTo quickly summarize the day: It was great but exhausting :-).. numberous talks with coworkers but also old friends (Hi Johan, Rene, Ronny, Alain, Cor, I mean you for example!). For me this was a first time as technical field manager at a conference. The SUE team was the biggest team of them all I think so that was great to see. Thank you all coworkers for your visit and your big enthusiasm. I hope that we can visit many more of these conferences!\nAbout the talks:\nThe keynote (by David Blank-Edelman) at first was very interesting and eye-opening. Not only because of the talk, but also because of the employer of David, Microsoft. A while ago I found that MS was seen more and more on open-source territory and I think this demonstrates that I am right about that. Nevertheless we got a serious talk about Site Reliability Engineering and what angles you can look at it. The main thing that David demonstrated is that \u0026ldquo;it depends\u0026rdquo;. You need to find a way to first describe your company and wishes, but also your clients. Reliability in this regard is measured at the client instead of internal monitoring/metric gathering. For example if you have 100 servers and a few of them go down, are you in panic ? It depends! If the customer doesn\u0026rsquo;t notice anything, just keep chilling. If the client does notice anything, like not being able to access your site, or add items to the shopping cart.. then make sure it starts working again! You need to define SLI and SLO\u0026rsquo;s (Service Level indicators, which metrics at what \u0026ldquo;item\u0026rdquo; in the business chain and Service Level Objectives: How do you want things to perform, what is your goal). I think you can do that with proper setup monitoring that not only checks availability but also does an actual login, or an actual shopping cart experience. When I worked for a major ISP in .NL they did that with Selenium. The setup replicated a user login from various remote places.\nMy next talk was about making scripts better by my old coworker and friend Michael Boelen. I enjoyed this talk, I am experienced in writing shell scripts, but still I learned a few things that I didnt know before. I also understand better on how to approach creating new scripts and what the caveats are for being posix compliant. 
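To give one concrete example of such a caveat (my own illustration, not something taken from Michael's talk): the double-bracket test is a bash extension, so a script that announces itself as #!/bin/sh should stick to the POSIX single-bracket form:

#!/bin/sh
# Bash-only, may break on a strict POSIX /bin/sh:
#   if [[ $answer == yes ]]; then echo ok; fi
# POSIX-compliant equivalent:
if [ "$answer" = yes ]; then
    echo ok
fi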
Michael had the crowd on his hand, he got a very interactive talk and still keeping track on the objective from his talk. It was appreciated Michael!\nAfter that I visited the talk about \u0026ldquo;treating documentation as code\u0026rdquo; by Hagen Bauer. I visited that with in my huge documentation experience from FreeBSD. Hagen has a setup where he uses asciidoctor and some modifications to print his documentation in various formats. It can use external input as well as just regular \u0026lsquo;md\u0026rsquo; kind of files. So once prepared, you can more easily write documentation just in ascci/plain text and if you have set it up with a CI/CD pipeline for example you can auto generate new documentation when you have a commit or change in your GIT repo. I think this lowers the barrier if there is a template and all you need to do is fill in an text file (With some markup). If I ever have time, this is really interesting to understand better!\nAfter Hagen\u0026rsquo;s talk we had lunch. There were many things to choose from, a dangerous approach because of an after lunch dip risk.\nNext up after the lunch was a talk from Koen de Jonge, board member for NLUUG. His talk was about a dream or idea: Community Hosted Open Source Infrastructure. (CHOSI.org). This dream started with how we (I include myself there) used to learn and do things, touch real hardware, modify kernels, wait ages for an kernel or program compilation completed and noticing that you made a mistake. Where nowadays people \u0026ldquo;take care\u0026rdquo; of you. The educational value of having in-depth knowledge about products is going away. The cloud offers items that you dont have to touch at all. Ofcourse the cloud infra people need to do so, but that group of people is being reduced in the world. See for example RHCE8, you learn ansible.. which is a great and fun thing to do, but you dont learn in-depth technical hacks on the commandline anymore. With this talk Koen tried to take the audience to a world which we knew from the past and bring that back. The general idea is to have at least one or more racks with own equipment which you can use to start all kinds of vm\u0026rsquo;s, from linux to a bsd to solaris. With a \u0026ldquo;bierviltje\u0026rdquo; calculation he noticed the required funding which would be approx 12 euro per user. I am very much interested!\nFurther down the road I visited the XS4all moet blijven talk from Anco Scholte ter Horst, current CEO of \u0026ldquo;Freedom Internet\u0026rdquo; (The new XS4all). Anco took us down the road which was followed after announcing that XS4all needs to be assimilated by KPN. He told us about the fight they have put up to save the company, to see alternatives and finally after another reply from KPN that they were going to assimilate XS4all and drop the company behind it, the birth of Freedom Internet. A very nice and driven talk from Anco, the crowd was very interested including myself. Lets see where this is heading and hopefully they can do what they want to do!\nNext in line I visited \u0026ldquo;what does vNUMA actually mean?\u0026rdquo; by Wim ten Have. Wim appears to be extremely technical and possesses knowledge that not many people have. During this track I was from time to time very lost. Not because Wim didn\u0026rsquo;t explain, but because I didn\u0026rsquo;t cope the in depth knowledge. NUMA is known for making processors able to directly access memory regions. 
In order to use that effectively you should combine CPU\u0026rsquo;s that are near each other and share the same NUMA domain. vNUMA is an addition to QEMU / KVM and enables automatic mapping of NUMA within a VM basically. (Someone correct me if I am wrong ;-)). Sadly, if you are not that experienced you will loose much of the information presented.\nJust before attending the last talk, I joined Martin Geusebroek\u0026rsquo;s talk about \u0026ldquo;Counter Social Engineering\u0026rdquo;. Martin is an experienced HUMINT officer and extremely knowledgeable about this subject. He gave multiple examples on how things work. There were a few demo\u0026rsquo;s / recordings of social engineering stuff that actually worked. Next to that he also did a \u0026ldquo;remember the given names\u0026rdquo;, where he did try to influence your brain. In the recap we were presented the given names. One of them was not in the list but your head thought it was, because a lot of the words shared the same topic. Your brain fills in the details. That makes social engineers able to influence you without you actually realising. When I worked at ING we had such trainings periodically and then for entire days. Always think who asks what and why. Try to properly verify someone\u0026rsquo;s identity and when in doubt, just get help from senior management. Even if the director is giving you a hard time, it\u0026rsquo;s in his companies best interest if you are very firm and solid in your work. Thanks Martin for bringing this topic to NLUUG!\nand finally the closing keynote: Tales (Fails) from the trenches… by Edwin den Andel. Edwin is a very classic hacker, in the sense that the name hacker was actually ment. Edwin is creative and thinks out of the box to try and obtain information. Not for the worse but for the better. Nowadays hackers are seen as nerds that break into computers and steal stuff. Edwin and Zerocopter behind him try to address that. They want to receive vulnerability information and highly suggest that you do not download entire datasets, but just one or two rows to proof that you can access data. Else you will get into the dark mazes of the law and you might even be prosecuted. Next to advocating the right thing, Edwin also gave numberous examples on how companies failed. I felt really connected with this topic and I think a lot of people found this the best talk of the day. Edwin easily presents his knowledge and is easy to follow. Edwin, I enjoyed your talk a lot, thank you!\nAfter all these talks people needed drinks and beverages. My employer SUE sponsored these and a lot of people stayed until it was time to wrap up. Together with a few coworkers from SUE, we were the last ones to leave the conference. I hope to be able to rejoin the NLUUG conference next year, either in my current role or in a new role.\n","permalink":"https://www.evilcoder.org/posts/2019-11-22-nluug-2019/","summary":"\u003cp\u003eAs the title states (in dutch sorry it\u0026rsquo;s the original name, but you can translate it to: autumn conference NLUUG2019), I was at the NLUUG 2019.\nMy last visit with Snow (now Sue) was when it was still in \u0026ldquo;De Reehorst\u0026rdquo; in Ede. Which appeared to have been a few years ago. One of the\nreasons for not visiting it more often is that I was soo much into FreeBSD that I didn\u0026rsquo;t look around that much. That changed this year\nwhen I stopped volunteering for FreeBSD. 
I got my RHCSA and RHCE this year and as technical field manager I decided to show my face and talk with\npeople (and learn something myself as well).\u003c/p\u003e","title":"NLUUG 2019 Najaarsconferentie"},{"content":"For Sue, educating our people is important. We therefore make sure that our colleagues are visited by us (field managers) to discuss technical topics as well as training, and to regularly plan what someone wants to develop in and how. As a company we then make that possible by facilitating courses, making study days available in the contract, and paying the costs of the training AND the certification.\nIn the business world, working for Sue, I am therefore used to a good, strict focus on development and education.\nWe regularly organise in-house training, given either by our own certified colleagues or by an external teacher who comes in to deliver the course. The groups are kept small so that everyone can get the attention they need. During our study days/weeks it is possible to stay for dinner in the evening and share knowledge with each other. So we stand for a high-quality learning programme.\nMy wife is a teacher, a field she chose so she could guide and educate our future (the people who will have to take care of us later!) so that they can have a good time in society, contribute their share, and enter the stage well educated.\nI had expected that education, where the service itself is \u0026ldquo;educating our future\u0026rdquo;, would have a much stronger focus on this than the commercial business world. Unfortunately, nothing could be further from the truth. There is an enormous amount of administrative work. Classes keep getting bigger, which makes personal attention almost impossible. A huge number of things have to be done and arranged outside school hours. Parents are becoming ever more vocal and expect more and more from the teacher; they even get angry when their child does not score at the level they would like to see. Because of all this it is, in my eyes, extremely difficult to deliver high-quality education.\nDespite the large amount of work and tasks, the work is enormously undervalued; a full-time teacher works, in my view, at least 50% extra outside school hours (not teaching time!). That amounts to well over 60 hours per week. In the business world, these teachers would receive a good salary and benefits for that (it is, after all, the primary service). Instead, teachers are rewarded with more work, more administration, and more children in the classroom.\nWhen we are old ourselves and can no longer give the attention that is needed, who will still be well enough educated to take care of us? Is it really necessary that we have to doubt and worry about this? Not if there is a proper and equal appreciation for the work.\nI have enormous respect for the teachers who went on strike and organised actions to draw attention to this. Yes, it is sometimes inconvenient when your child has to stay at home (that applies just as much to us), but it is worth it. This is about our future! The government and society must arrange this properly, not as a one-off but structurally. 
I hope that this will also allow us to enjoy our old age without worries.\nThanks, Remko\n","permalink":"https://www.evilcoder.org/posts/2019-11-08-aandacht-voor-de-leraar/","summary":"\u003cp\u003eFor Sue, educating our people is important. We therefore make sure that our colleagues are visited by us (field managers) to discuss technical topics as well as\ntraining, and to regularly plan what someone wants to develop in and how. As a company we then make that possible by facilitating courses, making study days\navailable in the contract, and paying the costs of the training AND the certification.\u003c/p\u003e","title":"Aandacht voor de leraar"},{"content":"If you had asked me a few years ago whether I was going to migrate my servers to Linux, I would have laughed and not even considered it. Since 2004 I have hosted all my own servers on the FreeBSD OS. I had one CentOS machine, because OpenXchange on FreeBSD was not the best experience. But now, in 2019, all my servers are running one of the Linux OS\u0026rsquo;es. Mainly Ubuntu.\nHow did we get there? Short summary: I did not feel at home anymore.\nLarger summary: The creation of the Code of Conduct within FreeBSD made me frown a lot, and still does. It\u0026rsquo;s largely American-oriented and does not take non-American views into consideration, or not enough. The current leadership is more worried about personal social media posts and how to respond to those than about guiding the project into the next phase. The world is not entirely American and different people with different cultures were welcome within FreeBSD. My personal feeling is that that is no longer the case.\nI realise that if you read this, this might make you frown as well. I am a long-standing community member, which covers a large part of my adult life. Does all this outweigh my long-term connection to the project? Yes.\nBeyond the \u0026ldquo;social\u0026rdquo; side of the project, I also think that while being conservative, we missed the boat on multiple occasions. Things come in late, or are not \u0026ldquo;addressed\u0026rdquo; at all. Take containers. They are the current hype for microservices. There is no way to do something with that within FreeBSD. FreeBSD has jails, which are a heavier-weight container-kind-of-solution; or better said, more like a lightweight virtual machine instead.\nTools like Gitlab CI/CD and many other things make use of containers. FreeBSD just does not have them. It\u0026rsquo;s not sexy enough to run in your DC. Sadly I do not see much company-wise activity in the Netherlands either that suggests I am wrong. Most things that I do see in my professional life are Linux-related machines.\nIs this the end for me? With my current FreeBSD implementations, yes. All my machines are migrated to Linux, there are no exceptions anymore. This makes it easier for my automation tooling, because everything runs on the same foundation and files can be found in the same place. Same goes for packages etc.\nFarewell FreeBSD, you have served me well and I think that I earned the right to use it by all my contributions. I hope that a less politically minded core team stands up at some point and changes the game. 
Perhaps that will make me rejoin the project that I once was so proud of.\n","permalink":"https://www.evilcoder.org/posts/2019-09-10-migrated-to-linux/","summary":"\u003cp\u003eIf you would have asked me a few years ago, whether I was going to migrate my servers to Linux?\nI would have laughed and not even consider it. Since 2004 I have hosted all my own servers on the\nFreeBSD OS. I had one CentOS machine, because OpenXchange on FreeBSD was not the best experience.\nBut now, in 2019, all my servers are running one of the Linux OS\u0026rsquo;es. Mainly Ubuntu.\u003c/p\u003e","title":"Migrated to Linux"},{"content":"A few years ago, I was informed that Paul Schenkeveld had passed away. That was very unpleasant news ofcourse. I knew Paul for some years, at the D-BUG or NLUUG BSD days he was one of the organisers and I was one of the speakers back then. In addition he was one of the main organisers of the 2011 EuroBSDCon in Maarssen. I always saw Paul.. and then Cor.. or the other way around.\nSo when I saw Cor at the NLUUG a few days ago.. I missed Paul ofcourse. I had not seen Cor for a few years and not after Paul\u0026rsquo;s passing. Seeing Cor alone instantly reminded me of Paul. You are still missed Paul. Rest in peace!\nImage taken from db.net where both Paul (Left) and Cor (Right) appeared on photo.\n","permalink":"https://www.evilcoder.org/posts/2019-11-24-im-paul-schenkeveld/","summary":"\u003cp\u003eA few years ago, I was informed that Paul Schenkeveld had passed away. That\nwas very unpleasant news ofcourse. I knew Paul for some years, at the D-BUG\nor NLUUG BSD days he was one of the organisers and I was one of the speakers\nback then. In addition he was one of the main organisers of the 2011 EuroBSDCon in\nMaarssen. I always saw Paul.. and then Cor.. or the other way around.\u003c/p\u003e","title":"In Memoriam Paul Schenkeveld 1963-2015"},{"content":"A while ago, my dear colleague Mattijs came with an interesting option in BIND. Response zones. One can create custom \u0026ldquo;zones\u0026rdquo; and enforce a policy on that.\nI never worked with it before, so I had no clue at all what to expect from it. Mattijs told me how to configure it (see below for an example) and offered to slave his RPZ policy-domains.\nAll of a sudden I was no longer getting a lot of ADS/SPAM and other things. It was filtered. Wow!\nHis RPZ zones were custom made and based on PiHole, where PiHole adds hosts to the local \u0026ldquo;hosts\u0026rdquo; file and sends it to 127.0.0.1 (your local machine), which prevents it to reach the actual server at all, RPZ policies are much stronger and more dynamic.\nRPZ policies offer the use of \u0026ldquo;redirecting\u0026rdquo; queries. What do I mean with that? well you can force a ADVERTISEMENT (AD for short) site / domain to the RPZ policy and return a NXDOMAIN. It no longer exists for the end-user. But you can also CNAME it to a domain/host you own and then add a webserver to that host and tell the user query\u0026rsquo;ing the page: \u0026ldquo;The site you are trying to reach had been pro-actively blocked by the DNS software. This is an automated action and an automated response. If you feel that this is not appropriate, please let us know on \u0026rdquo;, or something like that.\nOnce I noticed that and saw the value, I immediately saw the benefit for companies and most likely schools and home people. Mattijs had a busy time at work and I was recovering from health issues, so I had \u0026ldquo;plenty\u0026rdquo; of time to investigate and read on this. 
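To give an impression of what such a configuration looks like, here is a minimal sketch; the zone name, file path and domains are made up by me (this is not Mattijs' actual setup), and a real zone file also needs the usual SOA and NS records:

// named.conf: point the resolver at a response policy zone
options {
    response-policy { zone "rpz.example.org"; };
};
zone "rpz.example.org" {
    type master;
    file "/usr/local/etc/namedb/master/rpz.example.org";
};

; entries inside the zone file then drive the policy:
ads.example.net        CNAME .                   ; answer NXDOMAIN
*.ads.example.net      CNAME .                   ; same for every subdomain
tracker.example.net    CNAME block.example.org.  ; redirect to a block page
wanted.example.net     CNAME rpz-passthru.       ; whitelisted, left untouched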
The RPZ policies where not updated a lot and caused some problems for my ereaders for example (msftcncsi.com was used by them, see another post on this website for being grumpy about that). And I wanted to learn more about it. So what did I do?\nYes, I wrote my own parser. In perl. I wrote a \u0026ldquo;rpz-generator\u0026rdquo; (its actually called like that). I added the sources Mattijs used and generated my own files. They are rather huge, since I blocked ads, malware, fraud, exploits, windows stuff and various other things (gambling, fakenews, and stuff like that).\nI also included some whitelists, because msfctinc was added to the lists and it made my ereaders go beserk, and we play a few games here and there which uses some advertisement sites, so we wanted to exempt them as well. It\u0026rsquo;s better to know which ones they are and selectively allow them, then having traffic to every data collector out there.\nThis works rather well. I do not get a lot of complaints that things are not working. I do see a lot of queries going to \u0026ldquo;banned\u0026rdquo; sites everyday. So it is doing something .The most obvious one is that search results on google, not always are clickable. The ones that have those [ADV] sites, are blocked because they are advertising google sponsored sites, and they are on the list.. and google-analytics etc. It doesn\u0026rsquo;t cause much harm to our internet surfing or use experience, with the exception of the ADV sites I just mentioned. My wife sometimes wants to click on those because she searches for something that happends to be on that list, but apart from that we are doing just fine.\nOne thing though, I wrote my setup and this article with my setup using \u0026ldquo;NXDOMAIN\u0026rdquo; which just gives back \u0026ldquo;site does not exist\u0026rdquo; messages. I want to make my script more smart by making it a selectable, so that some categories are CNAMED to a filtering domain and webpage, and some are NXDOMAIN\u0026rsquo;ed. If someone has experience with that, please show me some idea\u0026rsquo;s and how that looks like and whether your end-users can do something with it or not. I think schools will be happy to present a block-page instead of NXdomain\u0026rsquo;ing some sites 🙂\nAcknowledgements: Mattijs for teaching and showing me RPZ, ISC for placing RPZ in NAMED, and zytrax.com for having such excellent documentation to RPZ. The perl developers for having such a great tool around, and the various sites I use to get the blocklists from. Thank you all!\nIf you want to know more about the tool, please contact me and we can share whatever information is available 🙂\n","permalink":"https://www.evilcoder.org/2018/02/05/reponse-zones-in-bind-rpz-blocking-unwanted-traffic/","summary":"\u003cp\u003eA while ago, my dear colleague Mattijs came with an interesting option in BIND. Response zones. One can create custom \u0026ldquo;zones\u0026rdquo; and enforce a policy on that.\u003c/p\u003e\n\u003cp\u003eI never worked with it before, so I had no clue at all what to expect from it. Mattijs told me how to configure it (see below for an example) and offered to slave his RPZ policy-domains.\u003c/p\u003e\n\u003cp\u003eAll of a sudden I was no longer getting a lot of ADS/SPAM and other things. It was filtered. Wow!\u003c/p\u003e","title":"Reponse zones in BIND (RPZ/Blocking unwanted traffic)."},{"content":"If you go looking for a usable webmail application, then you might end up with Open-Xchange (OX for short). 
Some larger ISP\u0026rsquo;s are using OX as their webmail application for customers. It has a multitude of options available, using multiple email accounts, caldav/carddav included (not externally (yet?)) etc. There are commercial options available for these ISP\u0026rsquo;s, but also for smaller resellers etc.\nBut, there is also the community edition available. Which is the installation you can run for free on your machine(s). It does not have some of the fancy modules that large setups need and require, and some updates might follow a bit later which are more directly delivered to paying customers, but it is very complete and usable.\nI decided to setup this for my private clients who like to use a webmail client to access their email. At first I ran this on a VM using Bhyve on FreeBSD. The VM ran on CentOS6 and had the necessary bits installed for the OX setup (see: https://oxpedia.org/wiki/index.php?title=AppSuite:Open-Xchange_Installation_Guide_for_CentOS_6). I modified the files I needed to change to get this going, and there, it just worked. But, running on a VM, with ofcourse limited CPU and Memory power assigned (There is always a cap) and it being emulated, I was not very happy with it. I needed to maintain an additional installation and update it, while I have this perfectly fine FreeBSD server instead. (Note that I am not against using bhyve at all, it works very well, but I wanted to reduce my maintenance base a bit :-)).\nSo a few days ago I considered just moving the stuff over to the FreeBSD host instead. And actually it was rather trivial to do with the working setup on CentOS.\nAt this moment I do not see an easy way to get the source/components directly from within FreeBSD. I have asked OX for help on this, so that we can perhaps get this sorted out and perhaps even make a Port/pkg out of this for use with FreeBSD.\nThe required host changes and software installation The first thing that I did was to create a zfs dataset for /opt. The software is normally installed there, and in this case I wanted to have a contained location which I can snapshot, delete, etc, without affecting much of the normal system. I copied over the /opt/open-xchange directory from my CentOS installation. I looked at the installation on CentOS and noticed that it used a specific user \u0026lsquo;open-xchange\u0026rsquo;, which I created on my FreeBSD host. I changed the files to be owned by this user. Getting a process listing on the CentOS machine also revealed that it needed Java/JDK. So I installed the openjdk8 pkg (\u0026lsquo;\u0026lsquo;pkg install openjdk8\u0026rsquo;\u0026rsquo;). The setup did not yet start, there were errors about /bin/bash missing. Obviously that required installing bash (\u0026lsquo;\u0026lsquo;pkg install bash\u0026rsquo;\u0026rsquo;) and you can go with two ways, you can alter every shebang (#!) to match /usr/local/bin/bash (or better yet #!/usr/bin/env bash), or you can symlink /usr/local/bin/bash to /bin/bash, which is what I did (I asked OX to make it more portable by using the env variant instead).\nThe /var/log/open-xchange directory does not normally exist, so I created that and made sure that \u0026lsquo;\u0026lsquo;open-xchange\u0026rsquo;\u0026rsquo; could write to that. (mkdir /var/log/open-xchange \u0026amp;\u0026amp; chown open-xchange /var/log/open-xchange).\nI was able to startup the /opt/open-xchange/sbin/open-xchange process with that. 
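Condensed into commands, the host preparation described above comes down to something like this (the pool name zroot and the nologin shell are my own choices, adjust to your system):

# dedicated dataset for /opt and a service user for Open-Xchange
zfs create -o mountpoint=/opt zroot/opt
pw useradd open-xchange -m -s /usr/sbin/nologin
# runtime dependencies plus the /bin/bash symlink the OX scripts expect
pkg install openjdk8 bash
ln -s /usr/local/bin/bash /bin/bash
# log directory, writable by the open-xchange user
mkdir -p /var/log/open-xchange
chown -R open-xchange /opt/open-xchange /var/log/open-xchange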
I could not yet easily reach it, on the CentOS installation there are two files in the Apache configuration that needed some attention on my FreeBSD host. The Apache include files: ox.conf and proxy_http.conf will give away hints about what to change. In my case I needed to do the redirect on the Vhost that runs OX (RedirectMatch ^/$ /appsuite/) and make sure the /var/www/html/appsuite directory is copied over from the CentOS installation as well. You can stick it in any location, as long as you can reach it with your webuser and Alias it to the proper directory and setup directory access).\nApache configuration (Reverse proxy mode) The proxy_http.conf file is more interesting, it includes the reverse proxy settings to be able to connect to the java instance of OX and service your clients. I needed to add a few modules in Apache so that it could work, I already had several proxy modules enabled for different reasons, so the list below can probably be trimmed a bit to the exact modules needed, but since this works for me, I might as well just show you;\nLoadModule slotmem_shm_module libexec/apache24/mod_slotmem_shm.so\nLoadModule deflate_module libexec/apache24/mod_deflate.so\nLoadModule expires_module libexec/apache24/mod_expires.so\nLoadModule proxy_module libexec/apache24/mod_proxy.so\nLoadModule proxy_connect_module libexec/apache24/mod_proxy_connect.so\nLoadModule proxy_http_module libexec/apache24/mod_proxy_http.so\nLoadModule proxy_scgi_module libexec/apache24/mod_proxy_scgi.so\nLoadModule proxy_wstunnel_module libexec/apache24/mod_proxy_wstunnel.so\nLoadModule proxy_ajp_module libexec/apache24/mod_proxy_ajp.so\nLoadModule proxy_balancer_module libexec/apache24/mod_proxy_balancer.so\nLoadModule lbmethod_byrequests_module libexec/apache24/mod_lbmethod_byrequests.so\nLoadModule lbmethod_bytraffic_module libexec/apache24/mod_lbmethod_bytraffic.so\nLoadModule lbmethod_bybusyness_module libexec/apache24/mod_lbmethod_bybusyness.so After that it was running fine for me. My users can login to the application and the local directory\u0026rsquo;s are being used instead of the VM which ran it first. If you notice previous documentation on this subject, you will notice that there are more third party packages needed at that time. It could easily be that there are more modules needed than that I wrote about. My setup was not clean, the host already runs several websites (one of them being this one) and ofcourse support packages were already installed.\nUpdating is currently NOT possible. The CentOS installation requires running \u0026lsquo;\u0026lsquo;yum update\u0026rsquo;\u0026rsquo; periodically, but that is obviously not possible on FreeBSD. The packages used within CentOS are not directly usable for FreeBSD. I have asked OX to provide the various Community base and optional modules as .tar.gz files (raw) so that we can fetch them and install them on the proper location(s). As long as the .js/.jar files etc are all there and the scripts are modified to start, it will just work. I have not (yet) created a startup script for this yet. For the moment I will just start the VM and see whether there are updates and copy them over instead. Since I did not need to do additional changing on the main host, it is a very easy and straight forward process in this case.\nSupport There is no support for OX on FreeBSD. Ofcourse I would like to see at least some support to promote my favorite OS more, but that is a financial situation. 
It might not cost a lot to deliver the .tar.gz files so that we can package them and spread the usage of OX on more installations (and thus perhaps add revenue for OX as commercial installation), but it will cost FTE\u0026rsquo;s to support more then that. If you see a commercial opportunity, please let them know so that this might be more and more realistic.\nThe documentation written above is just how I have setup the installation and I wanted to share it with you. I do not offer support on it, but ofcourse I am willing to answer questions you might have about the setup etc. I did not include the vhost configuration in it\u0026rsquo;s entirely, if that is a popular request, I will add it to this post.\nOpen Questions to OX So as mentioned I have questioned OX for some choices:\nPlease use a more portable path for the Bash shell (#!/usr/bin/env bash) Please allow the use of a different localbase (/usr/local/open-xchange for example) Please allow FreeBSD packagers to fetch a \u0026ldquo;clean\u0026rdquo; .tar.gz, so that we can package this for OX and distribute it for our end-users. Unrelated to the post above: Please allow the usage of external caldav/carddav providers Edit:\nI have found another thing that I needed to change. I needed to use gsed (Gnu-sed) instead of FreeBSD-sed so that the listuser scripts work. Linux does that a bit differently but if you replace sed with gsed those scripts will work fine.\nI have not yet got some feedback from OX.\n","permalink":"https://www.evilcoder.org/2017/08/29/freebsd-using-open-xchange-on-freebsd/","summary":"\u003cp\u003eIf you go looking for a usable webmail application, then you might end up with Open-Xchange (OX for short). Some larger ISP\u0026rsquo;s are using OX as their webmail application for customers. It has a multitude of options available, using multiple email accounts, caldav/carddav included (not externally (yet?)) etc. There are commercial options available for these ISP\u0026rsquo;s, but also for smaller resellers etc.\u003c/p\u003e\n\u003cp\u003eBut, there is also the community edition available. Which is the installation you can run for free on your machine(s). It does not have some of the fancy modules that large setups need and require, and some updates might follow a bit later which are more directly delivered to paying customers, but it is very complete and usable.\u003c/p\u003e","title":"FreeBSD: Using Open-Xchange on FreeBSD"},{"content":"For many System Administrators that have public facing Mailservers, it is an ongoing battle.. SPAM. Since there is money to make, it will never ever go away, but we can try to mitigate this.\nIntroduction on my usage of anti-spam products For many moons I have used the SpamAssassin product in various forms, simply as a client to check every email on delivery, as daemon where multiple servers check one instance, as part of MailScanner where a single (replicated) database was responsible for storing all bits and pieces combined with local additional rules. This worked fine for years, but, our external MX servers are not the most powerful machines in the world. We need to be selective on what we load on them. 
And the ever increasing spam battle just makes sure that your memory and processing power is going faster then the system(s) could continuously deliver.More rules, more Anti-Virus, more regular expressions, more downloading, parsing and re2c’ing files that gets harder and harder for the systems every time the amount of rules etc increases.\nI already mentioned that this worked fine for years. I switched to MailScanner for our MX’es not too long ago, and I am happy with that, except that it takes additional load on the machines, and will only judge about mails when they are already in. I contributed to MailScanner and specifically to the MailWatch project for reasons of LDAP authentication and more of those things, where I found space to improve. Even though I like the system very much, it is not how I want to prevent Spam from coming in. It might be a good fit for you though, it offers a quarantine where users can selectively release emails and mark them as spam and such and you can generate emails that send the amount of potentially missed emails and a link to them etc. Some of our users where happy with that as well, and so was I.\nLimitations of our handling of email But, resources were becoming a problem. Yes I can upgrade my external MX’es ofcourse and load them with more memory and CPU power, but that costs money. Money that is hard earned in the hosting world, because there is plenty to choose from, even if we give the best prices around, it still takes multiple additional customers to warrant the higher bills (that is not taking into account that profit would be fun for additional investments in the company so that our users can get even better products).\nSo, given the saturated market, I was not going to spend additional money on our machines just yet. Another thing is that I wanted to prevent spam from coming into the machine in the first place, so reject them at the border where possible, so I do not have to cater them. (See it as border patrol, it’s easier to prevent things coming in, then to handle them once they are in). I noticed that several email servers where already doing that when we forward mail for our domains to lets say gmail or other companies that people are happy to use. Those servers, like gmail, either rate limit you or they just deny the emails before you are able to send them. Leaving you with the problems instead of the gmail user itself. Magnificent. But how does that work? for Postfix, which I use that means using a milter, specifically in this case rmilter, which binds into the product on the SMTP level, checks signatures stored, scans the content and verifies with bayes and a neural network whether this is OK or not, and then either rejects it before processing it, greylisting it when it seems spammy or adds an header to the message and forwards it to it’s final destination. If we are the final destination, then the header is taken into account and the message is automatically put in the Spam folder, or for gmail/hotmail users this is the ‘unwanted email’ folder or whatever it is called nowadays. I have put filters in place, that learn your behaviour, so if a message is put in the Spam folder and is not spam and you move it back to for example the INBOX, then the system learns that it should not mark it as spam and try to do better next time.\nThe product: rspamd But what product delivers that ? After talking with a postmaster team member of FreeBSD, I found out about rspamd, and that the author is a fellow-FreeBSD-committer as well. 
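As a side note on the plumbing: hooking a milter such as rmilter into Postfix only takes a couple of settings. A sketch with an assumed socket path (where the socket really lives depends on your rmilter configuration):

# tell Postfix to run incoming SMTP traffic through the milter
postconf -e 'smtpd_milters = unix:/var/run/rmilter/rmilter.sock'
postconf -e 'milter_default_action = accept'
postfix reload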
I implemented it (there was a learning curve, but essentially it is rather easy, try it!). It has a lower load than the various SpamAssassin products and the additional applications that support them (like MailScanner and MailWatch), it does not need a webserver by itself, etc. It reduced my memory footprint by around 400MB of continuous usage. That is a whole lot if you have MBs to spare instead of handing them out.\nHow does it globally work? I configured rspamd to behave like the following:\nBoth our external MX’es have a local bayes classifier and various other local databases. I used the suggested three-database tier on each machine, and I extended both machines to use stunnel to contact each other’s remote database. I changed all configuration options to use not only servers = “localhost”; but servers = “localhost,localhost:26379”; instead, spreading that across every redis line I could find. I then restarted rspamd on both machines and noticed that a lot is going on: it seems that everything is written and read on both machines. Using the web interface you’ll sometimes get errors (not sure why that is) and the history is not always consistent, but it is for management purposes only, so that is not very problematic in this case. Both MX’es check their localhost, and “also_check” the remote machine over an internal private network that I have set up.\nOur internal machines that handle the delivery of the email use both MX’es as rspamd instances, as configured in rmilter. They do not handle anything themselves, except for virus scanning (which is also done on the MX’es, and locally only for email not received from the MX’es, such as outgoing email). That means less overhead for those machines, and we only use the two machines where we know it is working. I also extended these machines to use redis on the MX’es instead of locally and configured them both in the configuration, again using stunnel. rmilter uses the redis databases to store data about messages we have sent, the replies we get, and such. In the future, if rspamd is capable of handling this by itself, rmilter will be taken out and only rspamd will run as mentioned.\nLearning spam/ham messages For now this seems to work very well. I have implemented a dovecot script that triggers when someone moves a message from spam to inbox (‘learn-ham.sh’) and from the inbox or other mailboxes to the spambox (‘learn-spam.sh’).\nThe contents of the files look like the following, where learn_spam and learn_ham are used in the appropriate places of course:\n#!/bin/sh\ndata=$(cat)\necho \u0026quot;$data\u0026quot; | /usr/local/bin/rspamc -h MX1 -P \u0026lt;secret password for MX1\u0026gt; learn_spam\necho \u0026quot;$data\u0026quot; | /usr/local/bin/rspamc -h MX2 -P \u0026lt;secret password for MX2\u0026gt; learn_spam\nOf course it takes additional understanding of how email works, how your environment works and what is acceptable or not. Over the course of just a few days we processed more than 10k emails (yes, there are many providers handling more email, everyone has its own perks ;-)), and we learned more than 60 emails in just one day after enabling users to do their own training.\nOne note A little note about the rejecting of spam: we only reject spam when the message is really spammy and cannot easily be something else. 
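That decision is driven by the score thresholds for the different actions. Purely as an illustration (the numbers below are assumptions for this sketch, not my production values, and the exact file name differs per rspamd version, e.g. local.d/metrics.conf on 1.x versus local.d/actions.conf on later releases):
actions {
  greylist = 4;    # a bit suspicious: greylist and let a real MTA retry
  add_header = 6;  # probably spam: tag it so delivery files it into the Spam folder
  reject = 15;     # really spammy: refuse at SMTP time, the user never sees it
}
The gap between add_header and reject is what keeps borderline mail visible to the user instead of silently refusing it.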
Most emails that I have seen so far are forwarded with an additional header instead of being rejected, and the emails that are rejected really are spam. Users will never ever see them, which is good enough for my environment but might be different for yours. Please dry-run it first to see how it matches your environment.\nReferences The script for learning spam under dovecot comes from user Alex: https://kaworu.ch/blog/2014/03/25/dovecot-antispam-with-rspamd/\nThe documentation I used for rspamd comes from https://www.rspamd.com itself.\nThe sieve filters that I use for dovecot come from Dovecot itself: https://wiki2.dovecot.org/HowTo/AntispamWithSieve\nCustom blacklisting of domains and such comes from: https://gist.github.com/kvaps/25507a87dc287e6a620e1eec2d60ebc1\n","permalink":"https://www.evilcoder.org/2017/04/08/the-epic-spam-battle-from-spamassassin-10-year-user-to-rspamd/","summary":"\u003cp\u003eFor many System Administrators that have public facing mailservers, it is an ongoing battle: SPAM. Since there is money to be made, it will never ever go away, but we can try to mitigate it.\u003c/p\u003e\n\u003ch2 id=\"introduction-on-my-usage-of-anti-spam-products\"\u003eIntroduction on my usage of anti-spam products\u003c/h2\u003e\n\u003cp\u003eFor many moons I have used the SpamAssassin product in various forms: simply as a client checking every email on delivery, as a daemon where multiple servers check against one instance, and as part of MailScanner where a single (replicated) database was responsible for storing all the bits and pieces, combined with local additional rules. This worked fine for years, but our external MX servers are not the most powerful machines in the world. We need to be selective about what we load on them. And the ever increasing spam battle just makes sure that the demand for memory and processing power grows faster than the system(s) can continuously deliver. More rules, more anti-virus, more regular expressions, more downloading, parsing and re2c’ing of files: it gets harder and harder for the systems every time the number of rules increases.\u003c/p\u003e","title":"The epic spam battle from SpamAssassin (10 + year user) to rspamd."},{"content":"So. It had been a while since I had proper time to look into the Dutch translation efforts again.\nHistory Due to various reasons not discussed here, I was not able to give the translation proper attention. Rene did a lot of work (thank you for that, Rene!).\nThe PO system First of all, I am going to discuss a bit about the PO system, which is the gettext way of doing translations. It chops texts up into msgid’s (message IDs, the original strings), each of which is then given a translation in the corresponding msgstr (message string). Identical strings are translated the same, which can be a good thing, unless the context differs between the occurrences and then you might get ‘google translate’ kinds of results.\nBack to the story…\nAfter getting time again to see this through, I noticed that we started using the “PO” system, using gettext. Our handbook (for example) is now collapsed into one huge book.xml file, which is then cut into msgid’s that can each be given a translation in a msgstr. For this I use the poedit application (the PRO version) so that I have counters and translation suggestions from the online Translation Memory (TM) that we all develop. 
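To make the msgid/msgstr mechanics a bit more tangible, an entry in the resulting Dutch .po file looks roughly like this (the strings below are made up for illustration and are not taken from the actual handbook):
#: book.xml:42
msgid "FreeBSD is an operating system for a wide range of platforms."
msgstr "FreeBSD is een besturingssysteem voor een groot aantal platformen."

#: book.xml:57
#, fuzzy
msgid "This paragraph was changed in the English original."
msgstr "Deze alinea is gewijzigd in het Engelse origineel."
When the English source changes, gettext marks the entry as fuzzy, which is exactly the "invalidated and needs retranslation" situation that comes up further down.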
I also contribute the FreeBSD translations back to the TM so that everyone can profit from it.\nI am now first synchronising the Glossary, because that did not change much compared with the current online translation, and working my way back through what had already been translated, filling in the missing bits and pieces in between. Mike (a co-worker at Snow) also did a tremendous job over the last year getting this into better shape; that work had not yet been merged back into the online variants because it was not yet complete. I can use that information though to generate a handbook variant of that version and use it to carry the current translation effort further into the gettext/PO system.\nBiting the bullet As one of the first translation teams to use this, I expect to hit some bumps in the road. For example, there are lines that do not need translation: mailing list names are the same in every language, perhaps the description changes but not the ‘realnames’. The same goes for my entity (\u0026amp;a.remko), which does not change, nor does my PGP key. And if those things do change, they require changing across all translation efforts as well as the original English version. We are looking into a way to ‘ignore’ them for the PO system but include them when building, so that PGP keys and such are always up to date.\nI had also been discussing this with Vaclav, the developer of poedit, and he mentioned that it does not matter much, because when a line changes and you update the PO, those lines will be invalidated and the entire string needs ‘retranslation’. That gets us into interesting situations that we did not encounter before. I am biting the bullet myself, after we discussed this a few years ago, and I hope that the entire project can benefit from it.\nAlternative options, pre-translate, merge current translations automatically? And yes, a valid question would be: can you not merge the currently translated information into the PO system automatically? If every word were in the exact same spot and line, yes, that might be an option. Sadly, because of grammar and different wording (longer/shorter) this diverges rapidly from line 1 already and is thus not easily done. If you have suggestions however, we are always willing to listen. Please join us on translators@FreeBSD.org so that we can discuss those things better :-).\n","permalink":"https://www.evilcoder.org/2017/03/22/freebsd-dutch-documentation-project/","summary":"\u003cp\u003eSo. It had been a while since I had proper time to look into the Dutch translation efforts again.\u003c/p\u003e\n\u003ch2 id=\"history\"\u003eHistory\u003c/h2\u003e\n\u003cp\u003eDue to various reasons not discussed here, I was not able to give the translation proper attention. Rene did a lot of work (thank you for that, Rene!).\u003c/p\u003e\n\u003ch2 id=\"the-po-system\"\u003eThe PO system\u003c/h2\u003e\n\u003cp\u003eFirst of all, I am going to discuss a bit about the PO system, which is the gettext way of doing translations. It chops texts up into msgid’s (message IDs, the original strings), each of which is then given a translation in the corresponding msgstr (message string). Identical strings are translated the same, which can be a good thing, unless the context differs between the occurrences and then you might get ‘google translate’ kinds of results.\u003c/p\u003e","title":"FreeBSD Dutch Documentation Project"},{"content":"So I have this situation where I could not get my Kobo reader to connect to the internet and fetch updates and/or use Kobo+, for example.\nI started debugging with Ubiquiti ages ago to see where the problem lies. 
In the meantime I was unable to continue with this, but I had an interesting thought yesterday. I sniffed the traffic from the hardware (MAC) address of the ereader and noticed that it tried to resolve http://www.msftncsi.com and fetch /ncsi.txt. That site is a Microsoft network connection status page that tells Microsoft systems whether or not an active internet connection is available.\nSomehow it seems that Kobo is using that for its Android-based readers as well. Without it, the network connection just disconnects and does nothing. That is somewhat upsetting, because the device is perfectly able to connect to the network(s) and has relatively free internet access. One thing is that I filter DNS responses and exclude known malware/spam hosts and analytics sites like Google. This reduces the amount of advertisements and bogus trackers on the internet. It seems that msftncsi.com is also on that list and thus gets an NXDOMAIN when the reader queries for it.\nI do not entirely understand why an ereader would need this kind of information before being able to connect to the internet. The device should associate with a WiFi access point and get an address and the like. Whether or not that gives continued access to the internet is a next step. So instead of giving up, it could just mark the WiFi symbol with an exclamation mark (!) to report that something might not work and/or simply try to connect to the Kobo internet environment. That would be a more common way of using the internet than depending on a file on the internet which might be blocked (such as in my case).\nFor now I changed my caching MikroTiks to include msftncsi.com as a static entry pointing to my own webserver and serve the file from there instead. That makes sure the Kobo can connect to the environment and gives me full control over that file, instead of relying on some bogus remote site that might do nasty things (without me knowing).\nOf course I asked Kobo (nicely and politely) to change this interesting behaviour.\n","permalink":"https://www.evilcoder.org/2017/03/22/kobo-readers-using-the-internet/","summary":"\u003cp\u003eSo I have this situation where I could not get my Kobo reader to connect to the internet and fetch updates and/or use Kobo+, for example.\u003c/p\u003e\n\u003cp\u003eI started debugging with Ubiquiti ages ago to see where the problem lies. In the meantime I was unable to continue with this, but I had an interesting thought yesterday. I sniffed the traffic from the hardware (MAC) address of the ereader and noticed that it tried to resolve \u003ca href=\"http://www.msftncsi.com\"\u003ehttp://www.msftncsi.com\u003c/a\u003e and fetch /ncsi.txt. That site is a Microsoft network connection status page that tells Microsoft systems whether or not an active internet connection is available.\u003c/p\u003e","title":"Kobo readers using the internet"},{"content":"After ‘relaunching’ my Blog I have been occupied with other activities. So I just took a little time to say “Happy 2017” to all of you. Perhaps there will be more entries this upcoming year.. 🙂\n","permalink":"https://www.evilcoder.org/2017/01/04/happy-2017/","summary":"\u003cp\u003eAfter ‘relaunching’ my Blog I have been occupied with other activities. So I just took a little time to say “Happy 2017” to all of you. Perhaps there will be more entries this upcoming year.. 🙂\u003c/p\u003e","title":"Happy 2017!"},{"content":"It took a gentle while to get the blog back up and running. 
I first considered cleaning out the original blog, but that would have taken a lot of time and effort. So instead I just vaporised the old blog (well, not really, but the interwebs can no longer access it), and decided to rebuild the website. Please feel welcome here, if I feel up for it, I might convert a few older blog entries from the old blog to this new one. Do not expect periodic updates, they will not happen probably.\n","permalink":"https://www.evilcoder.org/2016/11/22/reorganised-and-back-online/","summary":"\u003cp\u003eIt took a gentle while to get the blog back up and running. I first considered\ncleaning out the original blog, but that would have taken a lot of time and\neffort. So instead I just vaporised the old blog (well, not really, but the\ninterwebs can no longer access it), and decided to rebuild the website. Please\nfeel welcome here, if I feel up for it, I might convert a few older blog\nentries from the old blog to this new one. Do not expect periodic updates,\nthey will not happen probably.\u003c/p\u003e","title":"Reorganised and back online"},{"content":" Name Remko Lodder\nFunction title DevOps (People) Manager \u0026amp; Senior IT Infrastructure Engineer\nSummary Experienced IT Leader and Engineer with over 20 years of experience in infrastructure, automation and DevOps. Combines technical expertise with people management and delivers scalable, secure and automated IT solutions. Proven track record in improving critical IT environments, leading of teams, implementation of CI/CD, automation and virtualization solutions.\nCore skills DevOps \u0026amp; CI/CD (Azure DevOps, pipelines) Infrastructure as Code (Ansible) Virtualization (VMware/Proxmox) Team leadership \u0026amp; coaching IT operations \u0026amp; incident management Security \u0026amp; compliance Disaster recovery \u0026amp; business continuity Working Experience Chapterlead (Managing) Engineer Virtualization \u0026amp; Storage ING Bank 2021 - current\nLead and coach a group of 12 engineers, including performance management and recruitment. Focus on wellbeing and development of employees Implementation of CI/CD pipelines reducing manual operations to a bare minimum. Used to install 400 hosts and migrate 800 hosts, without incidents. Speeding up the installations of at least 50% error-free Design and maintain scalable infrastructure based on VMware and Azure (30.000 vm\u0026rsquo;s and 1200+ servers) Design and implement the automation of operational processes with Ansible, automating the commissioning of hardware from an empty host to a fully-deployed virtualization environment, reducing manual labor by at least 50% Early VCF Adopter, utilizing previously written automation and refactoring of the code to utilize the VCF API\u0026rsquo;s directly. Knowledge sharing ING wide to get synergy on automation, reducing the learning curve with months. Leading disaster recovery testing and improving recovery procedures within the team with a 100% success rate Responsible for the stability, security and availability of critical systems Setting the standards in the tribe for automation / Ansible Important achievements:\nContinuous improvement of tribe and employees, reducing employee turnover and improving happiness in the team Significant reduction of deployment times through Ansible/API automation by at least 50% Improved collaboration between various teams, ING-wide Field manager \u0026amp; IT Infrastructure Engineer Snow B.V/Sue B.V. 
2006-2021\nCoaching of 35+ technical experts in the field, optimizing their efficiency and wellbeing Maintain and optimize wide range of IT Infrastructure Implementation of networking, security, unix, and virtualization concepts Public speaking and author of multiple magazine printed articles Supporting operations and ITIL processes Firewall \u0026amp; Security Engineer ING Bank NV. 2001-2006\nMaintenance, design and implementation of firewall environments Maintenance, design and implementation of Unix environments Functional steering of operational team Scripting various tools to reduce manual labour Certifications NLP Practitioner Solution Focussed Coaching CISSP CCNA + CCNP RHCSA + RHCE Various other IT and security certifications Education Solution focussed coaching - Centrum voor Conflicthantering NLP Practitioner - BPD Training VCF9 - Build Manage \u0026amp; Secure VCF5 - Install Configure Manage VCP8 - Install Configure Manage LPI1 + LPI2 Many Udemy trainings on programming, automation, wellbeing, etc. Many broadcom trainings on VCF, VCAP, etc. Other information Strong in combining technical expertise and leadership Strong experience in enterprise environments Strong in automation, scalability and reliability Contact remko@elvandar.org | LinkedIn | Personal website\n","permalink":"https://www.evilcoder.org/resume/","summary":"\u003chr\u003e\n\u003ch2 id=\"name\"\u003eName\u003c/h2\u003e\n\u003cp\u003eRemko Lodder\u003c/p\u003e\n\u003ch2 id=\"function-title\"\u003eFunction title\u003c/h2\u003e\n\u003cp\u003eDevOps (People) Manager \u0026amp; Senior IT Infrastructure Engineer\u003c/p\u003e\n\u003ch2 id=\"summary\"\u003eSummary\u003c/h2\u003e\n\u003cp\u003eExperienced IT Leader and Engineer with over 20 years of experience in infrastructure, automation and DevOps. Combines technical expertise\nwith people management and delivers scalable, secure and automated IT solutions. Proven track record in improving critical IT environments,\nleading of teams, implementation of CI/CD, automation and virtualization solutions.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"core-skills\"\u003eCore skills\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eDevOps \u0026amp; CI/CD (Azure DevOps, pipelines)\u003c/li\u003e\n\u003cli\u003eInfrastructure as Code (Ansible)\u003c/li\u003e\n\u003cli\u003eVirtualization (VMware/Proxmox)\u003c/li\u003e\n\u003cli\u003eTeam leadership \u0026amp; coaching\u003c/li\u003e\n\u003cli\u003eIT operations \u0026amp; incident management\u003c/li\u003e\n\u003cli\u003eSecurity \u0026amp; compliance\u003c/li\u003e\n\u003cli\u003eDisaster recovery \u0026amp; business continuity\u003c/li\u003e\n\u003c/ul\u003e\n\u003chr\u003e\n\u003ch2 id=\"working-experience\"\u003eWorking Experience\u003c/h2\u003e\n\u003ch3 id=\"chapterlead-managing-engineer-virtualization--storage\"\u003eChapterlead (Managing) Engineer Virtualization \u0026amp; Storage\u003c/h3\u003e\n\u003cp\u003eING Bank\n2021 - current\u003c/p\u003e","title":"Resume"},{"content":"Back in 2003 I wrote the first bits of the site that you are visiting now. As an homage to that time I added a screenshot from the internet archive for future reference. This will be back at some point.\n","permalink":"https://www.evilcoder.org/2003/01/09/about-my-blogs/","summary":"\u003cp\u003eBack in 2003 I wrote the first bits of the site that you are visiting now. As an homage to that time I added a screenshot from the internet archive for future reference.\nThis will be back at some point.\u003c/p\u003e","title":"About my blogs"}]