Technology
Welcome to our tech blog! Stay updated and follow us on Twitter.
In part 1 of this two-part blog, we developed a simple web component using only plain JavaScript, CSS and HTML. In this second part, we will explore how the stencil.js toolchain can help us author components and ease integration with some of today’s popular frameworks.
The initial W3C Web Components specification draft was introduced way back in 2011. Every now and then over the years I’ve read articles and blog posts about the progress, but it’s only recently that v1 of the spec has been adopted by the major browser vendors. In the meantime, popular frontend libraries and frameworks like React, Vue, and Angular have created their own separate ways of creating components. For me, this raises a few questions:
In this two-part blog series I will try to answer these questions by creating sample components using different techniques and subsequently integrating them into some popular frameworks. First we will go through a quick rundown of some basic concepts before moving on to explore stencil.js in the second part of the series.
In the last part, we implemented the Shared Database with Discriminator Column pattern using Hibernate Filters. We observed that it scales well, but that the data isolation guarantee is troublesome due to shortcomings in the Hibernate Filter mechanism.
In this part, we will tweak the solution and redo the critical Filtering part using an advanced database mechanism: Row Level Security.
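To hint at the direction this part takes, here is a minimal, hedged sketch (not the post’s actual code) of how a tenant identifier could be pushed into a PostgreSQL session setting at the start of each transaction, so that a Row Level Security policy defined in a migration can filter on it. The setting name `app.tenant_id` and the class name are illustrative assumptions.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical helper: applies the current tenant to a JDBC connection so that a
// PostgreSQL row level security policy, e.g. USING (tenant_id = current_setting('app.tenant_id')),
// only returns that tenant's rows. Names are assumptions, not the post's actual code.
public class TenantConnectionPreparer {

    public void apply(Connection connection, String tenantId) throws SQLException {
        // set_config(..., true) keeps the setting local to the current transaction
        try (PreparedStatement ps =
                 connection.prepareStatement("SELECT set_config('app.tenant_id', ?, true)")) {
            ps.setString(1, tenantId);
            ps.execute();
        }
    }
}
```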
In part 1 of this two-part blog I introduced a pattern for integration testing a Spring Boot application that publishes events via Kafka.
In part 2 I will discuss what happens when the test suite grows, look in depth at the Kafka Consumer, and offer one solution for how to reduce long execution times.
The source code for the example application can be found by following the link.
In the last part, we implemented the Schema-per-tenant pattern and observed that it scales better than the Database-per-tenant implementation. There will still most likely be an upper limit on the number of tenants it supports, caused by the Database Migrations that have to be applied to each tenant.
In this part, we will redo the solution and implement the Shared Database with Discriminator Column pattern using Hibernate Filters and some AspectJ magic.
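For orientation, a minimal sketch of the Hibernate Filter approach might look like the following; the entity, filter, parameter and column names are illustrative assumptions, not the code developed in the post.

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Filter;
import org.hibernate.annotations.FilterDef;
import org.hibernate.annotations.ParamDef;

// Illustrative entity: every row carries a tenant discriminator column,
// and a Hibernate filter restricts queries to a single tenant.
@Entity
@FilterDef(name = "tenantFilter", parameters = @ParamDef(name = "tenantId", type = "string"))
@Filter(name = "tenantFilter", condition = "tenant_id = :tenantId")
public class Customer {

    @Id
    private Long id;

    @Column(name = "tenant_id")
    private String tenantId;

    private String name;
}

// Before running queries, the filter has to be enabled on the current session:
// entityManager.unwrap(org.hibernate.Session.class)
//     .enableFilter("tenantFilter")
//     .setParameter("tenantId", currentTenant);
```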
A successful continuous delivery (CD) pipeline requires a high level of automated testing. It is essential that tests are reliable to ensure that nothing unexpected slips into your production environment. Swift execution is also desirable to provide timely feedback to developers.
Testing asynchronous processes provides a different set of challenges from testing a synchronous request-response scenario. In this two-part blog post I will investigate how to test an application that publishes events via Kafka. In part 1 I will demonstrate a method for getting started with integration testing, and in part 2 I will look at how this can be made faster.
The scenario presented in these blog posts is inspired by a real-life case. The following link will take you to the source code for the example application.
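To give a flavour of the kind of assertion such an integration test makes, here is a rough, simplified sketch of polling Kafka for an expected event; the topic name, the plain-string payload and the consumer configuration are assumptions and not taken from the example application.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Simplified test helper: poll a topic until a record matching the expected
// payload shows up, or give up after a deadline.
public class KafkaTestSupport {

    public static boolean eventWasPublished(Properties consumerProps,
                                            String topic,
                                            String expectedPayload) {
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of(topic));
            Instant deadline = Instant.now().plusSeconds(10);
            while (Instant.now().isBefore(deadline)) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    if (expectedPayload.equals(record.value())) {
                        return true;
                    }
                }
            }
            return false;
        }
    }
}
```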
In the last part, we implemented the Database-per-tenant pattern, and observed that it has limited scalability. In this part, we will tweak the solution and implement the Schema-per-tenant pattern in much the same way.
This is the second part of a two-part series about Testcontainers. In this part I will, among other things, show you some tricks to make your Testcontainers start much faster.
This is the first part of a two-part series about Testcontainers. In this first part I will explain what it is, what problems it tries to solve, how it works and finally how you can use it in your own projects. In the second part we will see if we can reduce startup time for our Testcontainers.
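As a quick taste of what Testcontainers usage can look like (a minimal sketch, not the example from the series; the image tag and the class name are assumptions), a test can spin up a throwaway PostgreSQL container and point the code under test at it:

```java
import org.testcontainers.containers.PostgreSQLContainer;

// Minimal sketch: start a disposable PostgreSQL instance in Docker and
// read out the JDBC coordinates the application under test should use.
public class PostgresContainerSketch {

    public static void main(String[] args) {
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:13")) {
            postgres.start();
            System.out.println("JDBC URL: " + postgres.getJdbcUrl());
            System.out.println("Username: " + postgres.getUsername());
            System.out.println("Password: " + postgres.getPassword());
            // ... configure the datasource of the application under test here ...
        } // the container is stopped and removed when closed
    }
}
```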
In this part, we’ll implement the Database-per-tenant pattern using Hibernate’s out-of-the-box support for Multi Tenancy, with Database Migrations using Liquibase and support for dynamically adding new tenants.
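One of the central pieces in Hibernate’s multi-tenancy support is the resolver that tells Hibernate which tenant the current session belongs to. A minimal sketch could look like the following; the ThreadLocal holder and the default tenant name are assumptions, not the post’s actual implementation.

```java
import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

// Sketch of a tenant resolver: Hibernate asks this component which tenant
// the current session belongs to before obtaining a connection.
public class ThreadLocalTenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    // Hypothetical holder, populated e.g. by a servlet filter per request.
    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    public static void setTenant(String tenantId) {
        CURRENT_TENANT.set(tenantId);
    }

    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenantId = CURRENT_TENANT.get();
        return tenantId != null ? tenantId : "default";
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        // Let Hibernate verify that already-open sessions match the resolved tenant.
        return true;
    }
}
```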
In this part, we will outline an implementation strategy to encapsulate a Multi Tenant Data Access mechanism as a transparent, isolated Cross Cutting Concern with little or no impact on the application code. We will also introduce the notion of Database Schema Migration and explain why it is a critical part of a Multi Tenancy mechanism.
Multi Tenancy usually plays an important role in the business case for SaaS solutions. Spring Boot and Hibernate provide out-of-the-box support for different Multi Tenancy strategies. The configuration, however, becomes more complicated, and the available code examples are limited. In the first part of this blog series, we’ll start by exploring the Multi Tenancy concept and three different architectural patterns for multi tenant data isolation. In the forthcoming parts, we’ll deep dive into the details of implementing the different patterns using Spring Boot, Spring Data and Liquibase.
This is the second part of my blog series on reactive programming, providing an overview of Project Reactor, a reactive library based on the Reactive Streams specification. Part 1 covered an introduction to reactive programming.
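If you have not seen Reactor before, the tiny sketch below (not taken from the post) hints at the style of the code examples in it: a Flux is a declarative pipeline of operators that does nothing until someone subscribes.

```java
import java.time.Duration;
import reactor.core.publisher.Flux;

public class ReactorTaste {
    public static void main(String[] args) throws InterruptedException {
        Flux.just("spring", "reactor", "webflux")   // a finite stream of values
            .map(String::toUpperCase)               // transform each element
            .delayElements(Duration.ofMillis(100))  // emit asynchronously
            .subscribe(System.out::println);        // nothing happens until subscribe

        Thread.sleep(500); // keep the JVM alive long enough to see the output
    }
}
```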
Working in a continuous delivery environment can feel a little daunting: any change you make will be rapidly delivered to your production environment. Although the intention is to provide immediate benefit for your customers, without proper safeguards there is a real risk of exposing bugs and triggering outages. In this blog post I will look at one strategy that uses metrics to reduce those risks.
This is the second part of my mini-series on how I used the Go profiling and built-in benchmarking tools to optimize a naive ray-tracer written in Go. For part 1, click here.
In this two-part blog we’ll take a look at how I used the Go profiling and built-in benchmarking tools to optimize a naive ray-tracer written in Go.
So, what is an ontology? In the broadest sense, an ontology is a knowledge representation, symbolically encoded so as to allow for computerized reasoning. Simplified, and using the terminology from the previous post: an ontology describes concepts and their relations to other concepts using a formalized language. This enables powerful computerized “thinking”, but creating a well-formed ontology is a big task.
Many are the situations where there’s a need to organize information for subsequent use, and one way to do this is to use a controlled vocabulary. This need may arise in a limited setting, where your requirements can be managed off the cuff, or it could arise in a setting where secondary use of information is foreseen but exactly how is unknown. In these more intricate circumstances, an ontology may serve you well.
RSocket is a new communication protocol that promises to solve the issues we have with HTTP, and in doing so it might also simplify the way we design and build distributed systems and microservices. I will come back to that last statement in a later blog post.
This blog series will serve as an introduction to building reactive web applications using Spring Boot, Project Reactor and WebFlux. It is intended as a beginner’s guide to the reactive world, but the reader is assumed to have previous knowledge of Java and Spring Boot.
Part one provides an overview of the different concepts behind reactive programming and its history. Part two serves as an introduction to Project Reactor with a lot of short code examples. The upcoming blog posts will cover WebFlux (Spring’s reactive web framework) and R2DBC (Reactive Relational Database Connectivity).
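As a preview of the WebFlux material, a functional endpoint can be declared in a few lines. This is a generic sketch, not code from the upcoming posts; the route path and bean name are assumptions.

```java
import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;

// Generic sketch of a WebFlux functional endpoint: a non-blocking handler
// exposed at GET /hello.
@Configuration
public class HelloRouter {

    @Bean
    public RouterFunction<ServerResponse> helloRoute() {
        return route(GET("/hello"),
                request -> ServerResponse.ok().bodyValue("Hello, reactive world"));
    }
}
```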