Klangism

Things that were important enough at the time when they were written.

Viktor Klang is a legendary programmer, known from places like the Internet. Consider following him on Twitter.

Mar 19

Microservices

Taking the opportunity to define Microservices, not in terms of what they are not, but in terms of what they are.

This is my definition.

Definition:


A Microservice is a discrete, isolated, and named piece of logic that consumes 0…N inputs and produces 0…N outputs and which is executed for the benefit of an invoker—it is performed as a service.


Discrete


The reason for calling it a Microservice instead of a Service is not that it has N lines of code where N is a small number, or that the deployable is at most N kilobytes, or any other arbitrary size metric. The reason is that it has a small number of responsibilities: one. It does one thing, and does it well.


Isolation


Isolation means that it operates separately from other things, which makes it an inherently distributed entity: it no longer matters on which physical hardware it is located or in which OS process it runs. It also fails in isolation, so cascading failures can be avoided. Furthermore, isolation allows it to evolve and be upgraded separately.


Named


Since Microservices are isolated, we need a way to locate and refer to them. The name, in combination with the types of the Microservice's inputs and outputs, can be said to form the signature or identifier of the Microservice.

Inputs & Outputs


Given a language that supports type information, the inputs and outputs can be typed, which means that those types can be reflected in the handle to the Microservice. The invoker can then know what the Microservice consumes and produces.


Execution


Since it is not the invoker that executes the logic of the Microservice, and Microservices are naturally distributed, invocations must not block the invoker while they are being performed; the invoker must not be held hostage by the Microservice until it has produced its outputs. This means we need an abstraction that lets us receive the outputs of the Microservice eventually. In other words, invocations are asynchronous.
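
To make this concrete, here is a purely illustrative Scala sketch (not from the post, and simplified to a single typed input and output): a handle to a Microservice that is named, typed, and invoked asynchronously.

import scala.concurrent.Future

// Illustrative only: a handle to a Microservice that is discrete, named,
// typed in its input and output, and invoked asynchronously.
trait Microservice[In, Out] {
  def name: String                   // the name used to locate and refer to the service
  def invoke(input: In): Future[Out] // the invoker eventually receives the output, without blocking
}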



Next post: Microservice Architectures—"what is needed to create an architecture composed of Microservices"

 


Nov 23

Serializable dynamic Proxies

So, I found myself in a situation where I was creating JDK Proxies, and I was in control of the invocation of newProxyInstance and needed to make the Proxies transparently Serializable.

For this to work you need writeReplace, so that you are in control of the rematerialization of the Proxy on the deserializing side via readResolve.

The catch is that you need to make sure the Proxy actually gets a writeReplace method, and you don't want to force whoever supplies the interfaces to proxy to implement writeReplace themselves, because at some point someone will forget it.

The solution might seem trivial, but here it is:

Define a trait/interface with the writeReplace method (with the correct signature) and include it among the interfaces you pass to newProxyInstance; the InvocationHandler can then answer writeReplace on behalf of every proxy.
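
Since the original snippet isn't embedded in this archive, here is a hedged sketch of the idea in Scala; the names (SerializableProxy, ProxyMoniker, newSerializableProxy) are mine, and the real code surely differs in detail.

import java.io.{ObjectStreamException, Serializable}
import java.lang.reflect.{InvocationHandler, Method, Proxy}

// The extra interface, declaring writeReplace with the signature serialization looks for.
trait SerializableProxy {
  @throws(classOf[ObjectStreamException])
  def writeReplace(): AnyRef
}

// A serializable stand-in; its readResolve rebuilds the proxy on the deserializing side.
// A real implementation would also capture whatever state the handler needs.
class ProxyMoniker(val interfaceNames: Array[String]) extends Serializable {
  @throws(classOf[ObjectStreamException])
  private def readResolve(): AnyRef =
    ProxyFactory.newSerializableProxy(interfaceNames.map(Class.forName(_)))
}

object ProxyFactory {
  // Appends SerializableProxy to the requested interfaces; the handler answers
  // writeReplace itself, so whoever supplies the interfaces never has to.
  def newSerializableProxy(interfaces: Array[Class[_]]): AnyRef = {
    val all: Array[Class[_]] = interfaces :+ classOf[SerializableProxy]
    Proxy.newProxyInstance(getClass.getClassLoader, all, new InvocationHandler {
      def invoke(proxy: AnyRef, method: Method, args: Array[AnyRef]): AnyRef =
        if (method.getName == "writeReplace") new ProxyMoniker(interfaces.map(_.getName))
        else null // dispatch of the actual proxied interfaces goes here
    })
  }
}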

Happy hAkking!

Cheers, √


Jul 1

DIY: Exhaustiveness checking on Scala Enums

Long time no blog,

I’m not much of a writer, so here’s the code!

Hardlink
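
Since the linked code isn't embedded in this archive, here is one possible way to DIY it: a hedged sketch of runtime exhaustiveness checking over an Enumeration. The linked version may well do it differently.

object EnumExhaustiveness {

  object Weekday extends Enumeration {
    val Monday, Tuesday, Wednesday = Value
  }

  // Verify up front that the handler covers every value of the Enumeration, and fail fast if not.
  def exhaustive[E <: Enumeration](e: E)(pf: PartialFunction[e.Value, Unit]): PartialFunction[e.Value, Unit] = {
    val missing = e.values.filterNot(pf.isDefinedAt)
    require(missing.isEmpty, "Match is not exhaustive, missing: " + missing.mkString(", "))
    pf
  }

  // Handles every value; remove a case and this throws as soon as it is initialized.
  val handleDay = exhaustive(Weekday) {
    case Weekday.Monday    => println("ugh")
    case Weekday.Tuesday   => println("still ugh")
    case Weekday.Wednesday => println("halfway there")
  }
}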

Cheers,


Mar 6

Implementing PostStart in #Akka

This question comes up every now and then, and my answer has been refined since the first time, so I thought I'd give it here.

"How do I run some initialization logic in the thread of the dispatcher and not the creator of the actor?"

I present to you, PostStart:

If you can’t see the code, click here
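
In case the embed doesn't show, here is a hedged sketch of the general pattern, written against Akka's classic Actor API; the original snippet targeted Akka 1.x, so names and details differ (PerformPostStart is made up).

import akka.actor.Actor

// Marker message used to trigger the hook from within the actor itself.
case object PerformPostStart

trait PostStart extends Actor {
  // User-defined hook that runs on the dispatcher thread, not on the creator's thread.
  def postStart(): Unit

  // Enqueue a message to ourselves when the actor starts...
  override def preStart(): Unit = self ! PerformPostStart

  // ...and compose this into your receive, e.g. def receive = postStartHandling orElse normalReceive
  def postStartHandling: Receive = {
    case PerformPostStart => postStart()
  }
}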

Easy?

Enjoy!


Feb 25

This is exactly what I want Akka to solve

"The fact is that everyone has scalability issues, no one can deal with their service going from zero to a few million users without revisiting almost every aspect of their design and architecture."

Quoted from: http://37signals.com/svn/archives2/dont_scale_99999_uptime_is_for_walmart.php


Dec 28

All #Actors in #Scala - compared

As a late Christmas present I bring this to you, dear readers:

I recently invited David Pollak (of Lift fame), Jason Zaugg (of Scalaz fame) and Philipp Haller (of Scala Actors fame) to collaborate with me on a feature matrix that would allow people to see the similarities and differences between the Actor libraries on the Scala platform.

So I’d like to start out by thanking David, Jason, Runar, Jonas, Philipp, Martin and all the people involved in these great projects for taking the time to fill out the matrix.

Here’s a link to a PDF-version of the document, enjoy!

PS. I attempted to embed the comparison in this blog post, but the formatting simply goes haywire. DS.


Dec 27

√iktor needs your help!

I've got some ideas for conference/group talks that I have yet to assemble, but I thought I'd let the community have a vote on which ones they'd find interesting.

Proposal 1:

Git things done

Approx time: 45 minutes

"This is a case-study on moving from CVS/SVN to Git I did at a previous employer. It’s about the challenges faced - cultural problems, technical problems, process problems and touches areas like: Code reviews, agile&lean (Scrum/Kanban), pragmatisism and corporate politics."

Proposal 2:

The Kay Way - True Object Orientation

Approx time: 30 minutes

"This talk explores OO using Actors, a model of concurrency that encapsulates Alan Kays original concept of object-orientation, showing different models of composition and inheritance, message-passing, asynchronous and synchronous messaging using the Akka framework"

Proposal 3:

Faster, Harder, Actor

Approx time: 25 minutes

"At the very core of Akka lies the Executor-Based-Event-Driven-Dispatcher, or EBEDD as we call it for short. This talk goes through its design and how it manages to provide the best default for Akka Actors in terms of performance and reliability and what configuration is possible to tune it for your workloads."

Proposal 4:

Scala4Java

Approx time: 20 minutes

"This talk is some tips and tricks to write Scala-code that can be easily consumed by Java code, with dos and don’ts and examples from Scala projects."

So what I need from you is to comment on this blog post with which of the above proposals you find interesting, or with any other suggestions on what you'd like me to present.

Thanks for helping me out!


Dec 8

Hardcore Pom

A fairly nice technique to speed up "update" times in SBT is to use ModuleConfigurations. A ModuleConfiguration can be viewed as a filter on top of a repository, saying: "Only look in this repository if the artifact you're looking for has the following group id."

Pattern:

class MyProject(info: ProjectInfo) extends DefaultProject(info) {

  object Repositories {
    // Add repositories here; they won't be used for artifact lookup since they're inside the object
    lazy val ScalaToolsRepo = MavenRepository("Scala Tools Repository", "http://nexus.scala-tools.org/content/repositories/hosted")
    // ... add more repos here
  }
  import Repositories._

  // Here we add our ModuleConfigurations
  // Equivalent of saying: "When you look for something with group id org.scalatest, look in this repo, but not otherwise"
  lazy val scalaTestModuleConfig = ModuleConfiguration("org.scalatest", ScalaToolsRepo)

  // Here we add our dependencies
  val scalatest = "org.scalatest" % "scalatest" % "1.2" % "test"
}

That's the entire pattern. This leads to extremely fast "sbt update" runs, and you are in more control of what gets downloaded from where.

However, I noticed that there is a big caveat.

Unfortunately, SBT doesn't take ModuleConfigurations into consideration when generating poms during publish-local and publish. If you use the technique above, this leads to poms that are missing most, if not all, of the repositories needed for their dependencies.

With a big list of dependencies and ModuleConfigurations this is what I got in my pom:

    <repositories>
        <repository>
            <id>ScalaToolsReleasesRepo</id>
            <name>Scala Tools Releases Repo</name>
            <url>http://scala-tools.org/repo-releases/</url>
        </repository>
    </repositories>

Not what I had expected; there should be quite a few repos in there…

So I hacked together a small chunk of Scala that will post-process the poms and add all repos in your ModuleConfigurations, and I’ve named it McPom (ModuleConfiguration into Pom).
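
The actual McPom is linked at the end of this post. As a rough illustration only (names and details here are mine, sketched against a current scala.xml API, not the real McPom code), the post-processing boils down to a method in your project definition that rebuilds the <repositories> element of the generated pom:

import scala.xml.{Elem, Node}

// Illustrative sketch: rebuild the <repositories> section of a generated pom from a list
// of (name, url) pairs derived from your ModuleConfigurations.
def addRepositories(repos: Seq[(String, String)])(pom: Node): Node = pom match {
  case e: Elem =>
    val repositories =
      <repositories>{
        repos.map { case (name, url) =>
          <repository>
            <id>{name.replaceAll("\\s", "")}</id>
            <name>{name}</name>
            <url>{url}</url>
          </repository>
        }
      }</repositories>
    // Drop any existing <repositories> node and append the rebuilt one
    e.copy(child = e.child.filterNot(_.label == "repositories") :+ repositories)
  case other => other
}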

And this is the same artifact after McPom is thrown into the game:

    <repositories>
        <repository>
            <id>ScalaToolsReleasesRepo</id>
            <name>Scala Tools Releases Repo</name>
            <url>http://scala-tools.org/repo-releases/</url>
        </repository>
        <repository>
            <id>javanetRepo</id>
            <name>java.net Repo</name>
            <url>http://download.java.net/maven/2/</url>
        </repository>
        <repository>
            <id>JBossRepo</id>
            <name>JBoss Repo</name>
            <url>http://repository.jboss.org/nexus/content/groups/public/</url>
        </repository>
        <repository>
            <id>public</id>
            <name>public</name>
            <url>http://repo1.maven.org/maven2/</url>
        </repository>
        <repository>
            <id>CodehausRepo</id>
            <name>Codehaus Repo</name>
            <url>http://repository.codehaus.org/</url>
        </repository>
        <repository>
            <id>GuiceyFruitRepo</id>
            <name>GuiceyFruit Repo</name>
            <url>http://guiceyfruit.googlecode.com/svn/repo/releases/</url>
        </repository>
        <repository>
            <id>DatabinderRepo</id>
            <name>Databinder Repo</name>
            <url>http://databinder.net/repo/</url>
        </repository>
        <repository>
            <id>GlassfishRepo</id>
            <name>Glassfish Repo</name>
            <url>http://download.java.net/maven/glassfish/</url>
        </repository>
    </repositories>

Woot! McPom saves the day!

The only thing you need to do is add the McPom trait to your project file, mix it into the projects that need it, and add the following override to them:

override def pomPostProcess(node: Node): Node = mcPom(moduleConfigurations)(super.pomPostProcess(node))
That’s it!
McPom can be found here
Enjoy!

Nov 30

SBT-fu: Publish privately

Have you ever had the problem that you need to publish some Jars where your colleagues can find them, but they are your secret web-scale sauce Jars and no one on the outside should be able to see them?

And you don't get permission to put the Jars on a network share, and you're not worthy of your own build server, says Mr. Infrastructure Manager?

And you happen to be using SBT?

There's this wonderful service out there called Dropbox, a small "cloud" storage solution that gives you a stash where you can put your personal files. You have the option to make files available to the public, but they ALSO give you the possibility to _share_ folders with your trusted accomplices.

Here’s what you need to do:

1) Download and install Dropbox

2) Create a new folder in your Dropbox root folder and name it something like "M2"

3) Add the following to your SBT project file

override def managedStyle = ManagedStyle.Maven

val publishTo = Resolver.file("shared-repo", (Path.userHome / "Dropbox" / "M2").asFile)

This tells SBT's "publish" action to place your artifacts in the "M2" folder in your Dropbox root.

4) Then add your awesome Dropbox M2 repo to the projects that need to use the published artifacts

val SharedRepo = MavenRepository("shared-repo", (Path.userHome / "Dropbox" / "M2").asURL.toString)

5) Log onto www.dropbox.com, go to the "Sharing" tab, and share the M2 folder with all your awesome ninja buddies

6) Do an SBT publish; this places your freshly baked artifacts in your Dropbox, and after Dropbox has synced, they will be propagated to your awesome friends.

7) No, seriously, you’re all done.

Enjoy!


Nov 29

Dispatchers in Akka 1.0-RC1

Curious about what we’ve done?

A lot of work has gone into the different dispatchers through Akka's relatively short but intense life so far.

Here's what's new in Akka 1.0-RC1:

1) There is no public start/stop lifecycle anymore; a dispatcher is started when it gets its first actor and stopped when the last actor leaves

2) Dispatchers will restart when they get another actor if they were previously stopped

3) Dispatchers will only stop if a new actor hasn't registered within X millis after the last one leaves. This timeout is tunable in your akka.conf via "akka.actor.dispatcher-shutdown-timeout" (see the snippet after this list)

4) All dispatchers now share the same unit test, which means we can ensure they all have the same behavior, i.e. that they respect the Actor Model

5) Since dispatchers can be restarted, you never need to worry about when and how to stop them, or how to handle re-initialization
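
For reference, the setting from item 3 might look something like this in your akka.conf (the value is just a placeholder; check the reference configuration that ships with the release for the default and the exact unit):

akka {
  actor {
    # How long a dispatcher waits for a new actor to register after the last one
    # leaves before shutting down (placeholder value)
    dispatcher-shutdown-timeout = 1000
  }
}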

We have also removed the Reactor-based Dispatchers since they had lousy performance and were never used.

Some more good news is that we have managed to improve performance even further, with the ExecutorBasedEventDrivenDispatcher closing in on the HawtDispatcher's excellent non-blocking performance.

Happy hAkking!

