Posts

Creating a flux sync configuration referring to a config map for substitution

If you configure your k8s cluster with terraform, one of the final steps is to install the fluxcd operator. The operator is responsible for managing the resources inside the cluster; in this way a fully automatic creation process is possible. To install flux, a terraform provider is available that prepares an install and a sync configuration, which can be applied to k8s using the kubernetes/kubectl providers. fluxcd itself supports substitution, meaning you can use variables in the k8s yaml files. This is useful for variable content like different cluster names and domain names across the stages, or more specialised values such as the ARN ids of resources created while provisioning the cluster in aws. To use substitution, fluxcd offers to link the sync configuration to a config map, in which a set of key/value pairs defines the replacement values. The map can also be created with terraform and the kubectl provider. Do not store secrets in the config map; try to use secrets instead. …
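A minimal sketch of how such a setup could look, assuming a ConfigMap named cluster-vars and example variable values (all names and values here are illustrative, not from the post):

```yaml
# ConfigMap with the substitution variables; it could equally be created
# by terraform through the kubernetes/kubectl providers.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-vars            # hypothetical name
  namespace: flux-system
data:
  cluster_name: staging-01
  cluster_domain: staging.example.com
---
# Flux Kustomization linking to the ConfigMap for post-build substitution.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: cluster-vars
```

Manifests under ./apps can then reference the values as ${cluster_name} and ${cluster_domain}.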

Create Spring Boot Images in Jenkins/CI in K8s, Part 1

Working out Jenkins/CI pipelines, a common task is to create Spring Boot images. Doing this with kubernetes agents is a challenge, since docker is no longer available from inside the containers. The task is quite simple: create a spring boot image with `mvn spring-boot:build-image` inside a kubernetes pod. By default this fails because spring tries to execute 'buildpacks', but the docker service that is needed to create the container image is not available. With spring 2.6 and below it is not possible to use another service than docker, but it is possible to run buildpacks separately after creating the project jar file. As a replacement for docker it is possible to use podman as the service; podman is compatible with docker and also supports the docker.sock. Looking forward to spring boot 2.7, which also supports podman as an alternative to docker, I did not try 'kaniko'. Objectives of this post: create a working pod and script to create a spring boot image. …
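A sketch of what such a pod could look like, assuming podman serves its Docker-compatible API on a shared socket (images, paths and names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-build        # hypothetical build agent pod
spec:
  containers:
    - name: maven
      image: maven:3.8-openjdk-17
      command: ["sleep", "infinity"]
      env:
        # the buildpacks step talks to this socket as if it were docker
        - name: DOCKER_HOST
          value: unix:///sock/podman.sock
      volumeMounts:
        - name: sock
          mountPath: /sock
    - name: podman
      image: quay.io/podman/stable
      # serve podman's Docker-compatible REST API on the shared socket
      command: ["podman", "system", "service", "--time=0", "unix:///sock/podman.sock"]
      securityContext:
        privileged: true         # rootful podman; tighten this for real clusters
      volumeMounts:
        - name: sock
          mountPath: /sock
  volumes:
    - name: sock
      emptyDir: {}
```

With that in place, `mvn spring-boot:build-image` run in the maven container should pick up the socket via DOCKER_HOST.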

Sonatype Nexus fails with random "peer not authenticated" errors behind ingress

After installing a nexus3 server in a kubernetes environment, at first view everything looks good: the server is running and accessible from the world, users and roles are defined and working. But already in the first hours of working with the repository there are problems. The trick was to retry the failed activity until everything went ok. It turned out that heavy activities failed a lot of times and many retries had to be done to finish the tasks. Annoying if a release deploy fails, because you need to increase the version number every time. There was some support on the internet for these problems, most of it about SSL problems between maven and nexus, but that did not fix the problem. It looks like maven and the http library it uses, wagon, have problems with pooled connections when going through ingress (a test with a standalone installation did not show the problem). The 'deep' reason is not really clear to me; I need to invest more time in it. The solution is to deny wagon the use of pooled connections. …
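As a sketch, the pooling can be switched off per invocation with the wagon property maven.wagon.http.pool; in a Jenkins agent pod fragment that could look like this (image and goal are assumptions):

```yaml
# Fragment of a hypothetical Jenkins agent pod spec: run the deploy with
# wagon connection pooling disabled, so each request opens a fresh connection.
containers:
  - name: maven
    image: maven:3.8-openjdk-11
    command: ["mvn"]
    args:
      - "-Dmaven.wagon.http.pool=false"   # deny wagon the pooled connections
      - "-Dhttp.keepAlive=false"          # optionally also disable JVM keep-alive
      - "deploy"
```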

Error Reporting and Forwarding

Overview: Error reporting is most of the time underestimated, because it is believed to be clear how it works. But the real requirements are usually not clear, and most of the time it is not working as it should. This article shows the needs and goals of good error reporting, how it is currently done, and how it is related to logging. Especially in dedicated systems, error reporting must be defined anew. Reporting: Why do we report errors? We want to let someone or something know that something went wrong. This gives the first requirement for error reporting: we need to report what is wrong, exactly enough to understand the problem or to analyse it deeper. To do this, a static error message for the case is necessary, plus dynamic data to describe the specific case. Also a stack trace should be delivered to show exactly the code line where the error occurred. The stack trace is not part of this contemplation, because it depends on the underlying programming language. …
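As an illustration of the split between static and dynamic parts, a structured error report could look roughly like this (all field names and values are hypothetical):

```yaml
# Sketch of an error report: a stable identifier and message for the error
# case (static), plus data describing the concrete occurrence (dynamic).
error:
  code: "ORDER-4021"                                 # static: the error case
  message: "Order rejected: credit limit exceeded"   # static description
  data:                                              # dynamic: this occurrence
    orderId: "A-38291"
    customerId: "C-1002"
    creditLimit: 5000
    requestedAmount: 7200
  timestamp: "2021-08-14T10:22:31Z"
  stacktrace: |                                      # language dependent, out of scope
    ...
```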

Create docker/container images in a Kubernetes Jenkins pipeline

The challenge is quite simple: to deploy a piece of software to Kubernetes it is necessary to create the (docker) container image in a pipeline. But the solution was not found directly. Most articles show how to use the docker socket or tcp port to create images, but since docker is no longer part of Kubernetes this will not work any more. The solution is to use 'kaniko'; the project looks able to master image creation.

1) First install and configure Jenkins via values.yaml:

```yaml
controller:
  JCasC:
    securityRealm: |-
      local:
        allowsSignup: false
        enableCaptcha: false
        users:
        - id: "admin"
          name: "Jenkins Admin"
          password: "xxx"
    authorizationStrategy: |-
      loggedInUsersCanDoAnything:
        allowAnonymousRead: false
  ingress:
    enabled: true
    paths: []
    apiVersion: "extensions/v1beta1"
    hostName: jenkins.mycluster.de
```

`kubectl create namespace` …
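For the kaniko side, a build pod could look roughly like this (registry host, repository URL and secret name are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=git://github.com/example/app.git        # hypothetical repo
        - --dockerfile=Dockerfile
        - --destination=registry.mycluster.de/app:latest    # hypothetical registry
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: regcred            # hypothetical push credentials
        items:
          - key: .dockerconfigjson
            path: config.json
```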

[mhus lib] Implemented Bearer JWS tokens

mhus now supports bearer tokens for authentication. I used the project jjwt to implement the tokens and added dependencies on the current version 0.11.2. The token is implemented to be used with apache shiro. A new service, JwtProvider, implements creation and reading of the tokens. The keys, if they do not exist, will be created and stored in the keychain, private and public keys in separate key sources; in this way the public keys can be published to other nodes. A new interface, BearerRealm, must be used to mark realms with bearer support. Using the interface, a token can be created from the realm implementation; you should use the AccessUtil to create tokens. The authentication is already implemented in rest and micro calls. Via rest, a node '/jwt_token' can be used to create a token, which expires after one hour, and use it as authentication. To use jjwt with osgi I was forced to create a port project. First, the 'feature' character of the bundles is not supported in the current felix …

[mhus lib] Reorg in generation 7 nearly finished

The generation 7 brought massive reorganisations and reimplementations. After renaming the repositories (the 'cherry' prefix moved to 'mhus'), the reorg is finished and the sources are ready for a more stable generation 8. Generation 8 will add new features to work with non-OSGi services in cloud environments. Actual changes:

cherry-reactiive -> mhus-reactive
cherry-vault -> mhus-vault
cherry-web -> mhus-web
mhus-osgi-cluster -> mhus-cluster