Tuesday, March 21, 2017

Apache Mesos, Overview and Architecture

Apache Mesos is a cluster manager, or distributed systems kernel, built on the same principles as the Linux kernel but at a different level of abstraction.

It abstracts CPU, memory, storage, and other physical and virtual resources away from individual machines, enabling fault-tolerant and elastic distributed systems.

The Mesos kernel runs on every machine and provides applications (Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenters and cloud environments.

Fault tolerance for masters and agents is provided through Apache ZooKeeper.

It has native support for launching containers with Docker and AppC images (AppC is the App Container specification from the Organisation for the App Container spec, including the schema and associated tooling).

It supports isolation of CPU, memory, disk, ports, and GPU.
It exposes HTTP APIs for developing new distributed applications, operating the cluster, and monitoring.


Mesos consists of a master daemon that manages agent daemons running on each cluster node, and Mesos frameworks that run tasks on those agents.

The master enables sharing of resources (CPU, RAM, ...) across frameworks and decides how many resources to offer each framework according to a given organizational policy, such as fair sharing or strict priority.

To support a diverse set of policies, the master employs a modular architecture that facilitates the addition of new allocation modules through a plug-in mechanism. 
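Mesos' default allocation module implements Dominant Resource Fairness (DRF), which offers resources to the framework whose largest fractional share of any single resource is smallest. A minimal sketch of the idea, with illustrative names and numbers (this is not Mesos code):

```python
# Minimal sketch of Dominant Resource Fairness (DRF), the policy behind
# Mesos' default allocation module. All names and numbers are illustrative.

TOTAL = {"cpus": 9.0, "mem": 18.0}  # cluster capacity

def dominant_share(allocated, total):
    """A framework's dominant share is its largest fraction of any resource."""
    return max(allocated[r] / total[r] for r in total)

def next_framework(allocations, total):
    """DRF offers resources to the framework with the lowest dominant share."""
    return min(allocations, key=lambda f: dominant_share(allocations[f], total))

allocations = {
    "framework1": {"cpus": 2.0, "mem": 8.0},  # dominant share: 8/18 ~ 0.44
    "framework2": {"cpus": 3.0, "mem": 3.0},  # dominant share: 3/9  ~ 0.33
}

print(next_framework(allocations, TOTAL))  # framework2 (lower dominant share)
```

Framework 2 is memory-light, so under DRF it gets the next offer even though it holds more CPUs.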

A framework running on top of Mesos consists of two components:
  •  a scheduler that registers with the master to be offered resources
  •  an executor process that is launched on agent nodes to run the framework's tasks.

The master determines how many resources are offered to each framework.
The framework's scheduler decides which of the offered resources to use.

The figure below shows an example of how a framework is scheduled to run a task.

  1. Agent 1 reports to the master that it has 4 CPUs and 4 GB of free memory. The master then invokes the allocation policy module, which tells it that framework 1 should be offered the available resources.
  2. The framework's scheduler replies to the master that it has two tasks to run on the agent: the first needs 2 CPUs and 1 GB of memory, and the second needs 1 CPU and 2 GB of memory.
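The two steps above can be simulated as a toy offer cycle. Everything here is plain illustrative Python, not the real Mesos API:

```python
# Toy simulation of the Mesos offer cycle described above.
# These are plain Python dicts, not real Mesos API objects.

agent_resources = {"cpus": 4, "mem_gb": 4}

# Step 1: the master offers the agent's free resources to framework 1.
offer = {"agent": "agent1", **agent_resources}

# Step 2: the framework's scheduler answers with two tasks that fit the offer.
tasks = [
    {"name": "task1", "cpus": 2, "mem_gb": 1},
    {"name": "task2", "cpus": 1, "mem_gb": 2},
]

used_cpus = sum(t["cpus"] for t in tasks)
used_mem = sum(t["mem_gb"] for t in tasks)
assert used_cpus <= offer["cpus"] and used_mem <= offer["mem_gb"]

# The unused remainder (1 CPU, 1 GB) can be offered to other frameworks.
remaining = {"cpus": offer["cpus"] - used_cpus,
             "mem_gb": offer["mem_gb"] - used_mem}
print(remaining)  # {'cpus': 1, 'mem_gb': 1}
```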

Scheduling algorithm (Multilevel queue scheduling)

This algorithm can be used when processes fall into different groups.
Example: the division between foreground processes and background processes.
These two types of processes have different response-time requirements, so each group can get its own queue and scheduling policy.
It is commonly used in time-sharing environments.
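The foreground/background split above can be sketched with two queues, assuming the foreground queue has strict priority over the background one (all names are illustrative):

```python
from collections import deque

# Illustrative sketch of multilevel queue scheduling: each process is fixed
# to one queue, and the foreground queue has strict priority over background.

foreground = deque(["ui-1", "ui-2"])        # interactive processes
background = deque(["batch-1", "batch-2"])  # batch processes

def pick_next():
    """Serve the foreground queue first; background only when it is empty."""
    if foreground:
        return foreground.popleft()
    if background:
        return background.popleft()
    return None

order = [pick_next() for _ in range(4)]
print(order)  # ['ui-1', 'ui-2', 'batch-1', 'batch-2']
```

A real scheduler would also run a policy inside each queue (e.g. round-robin for foreground, FCFS for background); this sketch only shows the strict priority between queues.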


Mesos provides mechanisms to reserve resources on specific agents (slaves).
There are two types of reservation:
  • Static Reservation
  • Dynamic Reservation (Default)


Containerizers let you:
  • Isolate a task from other running tasks.
  • Run tasks in a resource-limited environment.
  • Control an individual task's resources (e.g. CPU, memory) programmatically.
  • Run software in a pre-packaged file system image, allowing it to run in different environments.

Types of containerizers

Mesos can work with different container technologies besides Docker, but by default Mesos uses its own containerizer.
Supported containerizer types:
  • Composing
  • Docker
  • Mesos

Composing containerizer
Makes it possible to work with the Docker and Mesos containerizers at the same time.

Docker containerizer
You can launch a Docker image as a Task, or as an Executor.

Mesos containerizer

This containerizer allows tasks to be run with an array of pluggable isolators provided by Mesos.

It lets Mesos control tasks at runtime without relying on other container runtimes:
  • Control of OS facilities such as cgroups and namespaces
  • Promises to support the latest container technologies
  • Control of disk usage limits
  • Isolation can be customized per task
High-Availability Mode

If the master becomes unavailable, existing tasks continue to run, but new resources cannot be allocated and new tasks cannot be launched.
To reduce the impact of this happening, Mesos runs multiple masters: one active (the leader) and several backups in case of failure.
The election of the new leader is coordinated by ZooKeeper.

Mesos also uses Apache ZooKeeper, originally part of the Hadoop project, to synchronize distributed processes, ensuring all clients receive consistent data and providing fault tolerance.

Node discovery is also done by ZooKeeper.

When a network partition occurs and disconnects a component (master, agent, or scheduler) from ZooKeeper, the master detects this and induces a timeout.

Observability Metrics

The information reported by Mesos includes details about resource availability, resource usage, registered frameworks, active agents, and task state.
It is possible to create automated alerts and put different metrics in a dashboard.

Mesos provides two types of metrics:

Counters -> keep track of discrete events and how their counts change over time.

Gauges -> represent an instantaneous snapshot of some magnitude.
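The master exposes these metrics as a flat JSON object via its /metrics/snapshot HTTP endpoint. The sketch below parses a canned sample instead of calling a live master; the metric values are illustrative, and the field names follow the master/* naming scheme:

```python
import json

# Parse a /metrics/snapshot-style response from the Mesos master.
# This uses a canned sample instead of a live HTTP call; values are illustrative.
sample = json.loads("""{
    "master/cpus_total": 8.0,
    "master/cpus_used": 3.0,
    "master/tasks_running": 12.0,
    "master/tasks_finished": 340.0
}""")

# A simple automated alert built on gauges: fire when CPU usage crosses 80%.
cpu_usage = sample["master/cpus_used"] / sample["master/cpus_total"]
alert = cpu_usage > 0.8

print(f"cpu usage: {cpu_usage:.0%}, alert: {alert}")  # cpu usage: 38%, alert: False
```

In a real setup, the same parsing would be fed from an HTTP GET against the master (e.g. port 5050) on a schedule, with the results pushed to a dashboard.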

Persistent Volumes

When you launch a task, you can create a volume that exists outside the task's sandbox and persists even after the task exits or completes.
Mesos provides this mechanism to create a persistent volume from disk resources.
When the task finishes, its resources, including the persistent volume, can be offered back to the framework, so that the framework can launch the same task again, launch a recovery task, or launch a new task that consumes the previous task's output as its input.
Persistent volumes allow services such as HDFS and Cassandra to store their data within Mesos.

The Mesos Replicated Log

Mesos provides a library that allows you to create fault-tolerant replicated logs; this library is known as the replicated log.
The Mesos master uses this library to store cluster state in a replicated and durable way.
The library is also available to frameworks, to store replicated framework state or to implement the common pattern of a "replicated state machine".
The replicated log is often used to allow applications to manage replicated state with strong consistency.

Mesos  Frameworks:

  • Vamp is a deployment and workflow tool for container orchestration systems, including Mesos/Marathon. It brings canary releasing, A/B testing, auto scaling and self healing through a web UI, CLI and REST API.
  • Aurora is a service scheduler that runs on top of Mesos, enabling you to run long-running services that take advantage of Mesos' scalability, fault-tolerance, and resource isolation.
  • Marathon is a private PaaS built on Mesos. It automatically handles hardware or software failures and ensures that an app is “always on”.
  • Spark is a fast and general-purpose cluster computing system which makes parallel jobs easy to write.
  • Chronos is a distributed job scheduler that supports complex job topologies. It can be used as a more fault-tolerant replacement for Cron.

Mesos offers many of the features that you would expect from a cluster manager, such as:

  • Scalability to over 10,000 nodes
  • Resource isolation for tasks through Linux Containers
  • Efficient CPU and memory-aware resource scheduling
  • Highly-available master through Apache ZooKeeper
  • Web UI for monitoring cluster state

Friday, February 3, 2017

JSON Web Token, Security for applications

JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way to securely transmit information between parties as a JSON object.
This information can be verified and trusted because it is digitally signed.
JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA.

Some important concepts:
  • Compact: JWTs can be sent through a URL, POST parameter, or inside an HTTP header. 
  • Self-contained: The payload contains all the required information about the user, avoiding the need to query the database more than once.

When should you use JSON Web Tokens:
  • Authentication: This is the most common scenario for using JWT. Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token. 
  • Information Exchange: JSON Web Tokens are a good way of securely transmitting information between parties, because as they can be signed, for example using public/private key pairs, you can be sure that the senders are who they say they are. 

JWT Structure:

A complete JWT is represented like this:


This token can be sliced into 3 parts:




Each part is separated by a “.”:
<base64url-encoded header>.<base64url-encoded claims>.<base64url-encoded signature>
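Using only the Python standard library, the three Base64URL parts can be produced and checked. A minimal HS256 sketch (the secret and claims are illustrative):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses Base64URL encoding without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

def verify_hs256(token: str, secret: bytes) -> bool:
    signing_input, _, signature = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    # Constant-time comparison to avoid timing attacks
    return hmac.compare_digest(b64url(expected), signature)

token = sign_hs256({"sub": "1234567890", "name": "John Doe"}, b"my-secret")
print(token.count("."))                      # 2 -> header.claims.signature
print(verify_hs256(token, b"my-secret"))     # True
print(verify_hs256(token, b"wrong-secret"))  # False
```

In production you would normally use a maintained library instead, which also validates claims such as expiration; this sketch only shows how the three dot-separated parts fit together.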

Here is a simple example of the authentication flow for a user accessing an API server.

Tuesday, January 3, 2017

Automating Infrastructure on Premise or Cloud with Ansible

Ansible tasks are idempotent. Without a lot of extra coding, bash scripts are usually not safe to run again and again. Ansible uses "Facts", which are system and environment information it gathers ("context") before running tasks.

Design Principles

  • Have a dead simple setup process and a minimal learning curve
  • Manage machines very quickly and in parallel
  • Avoid custom-agents and additional open ports, be agentless by leveraging the existing SSH daemon
  • Describe infrastructure in a language that is both machine and human friendly
  • Focus on security and easy auditability/review/rewriting of content
  • Manage new remote machines instantly, without bootstrapping any software
  • Allow module development in any dynamic language, not just Python
  • Be usable as non-root
  • Be the easiest IT automation system to use, ever.

Ansible by default manages machines over the SSH protocol.

Once Ansible is installed, it will not add a database, and there will be no daemons to start or keep running. You only need to install it on one machine (which could easily be a laptop) and it can manage an entire fleet of remote machines from that central point. When Ansible manages remote machines, it does not leave software installed or running on them, so there’s no real question about how to upgrade Ansible when moving to a new version.

Playbooks could be considered the main concept in Ansible.

Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.

At a basic level, playbooks can be used to manage configurations of and deployments to remote machines. At a more advanced level, they can sequence multi-tier rollouts involving rolling updates, and can delegate actions to other hosts, interacting with monitoring servers and load balancers along the way.

Playbooks are designed to be human-readable and are developed in a basic text language.

Playbooks are expressed in YAML format and have a syntax, which intentionally tries to not be a programming language or script, but rather a model of a configuration or a process.

In my example, I set up two virtual machines with Vagrant: on the first I installed Ansible, and to the second I applied some configurations.

See how to configure a multi-machine setup like this in my previous post.

Vagrantfile to multi-machine:

 Vagrant.configure(2) do |config|
   config.vm.box = "ubuntu/trusty64"
   config.vm.define "machine1" do |node1|
     node1.vm.network "private_network", ip: ""
     node1.vm.hostname = "machine1"
     node1.vm.provider "virtualbox" do |v|
       v.memory = 1024
       v.cpus = 1
     end
   end
   config.vm.define "machine2" do |node2|
     node2.vm.network "private_network", ip: ""
     node2.vm.hostname = "machine2"
     node2.vm.provider "virtualbox" do |v|
       v.memory = 1024
       v.cpus = 1
     end
   end
 end

On machine1, install Ansible with the commands below:

#vagrant ssh machine1

If asked for a password, use “vagrant”.

Commands to Install Ansible:

  1.  sudo apt-get install software-properties-common
  2.  sudo apt-add-repository ppa:ansible/ansible
  3.  sudo apt-get update
  4.  sudo apt-get install ansible

Edit /etc/ansible/hosts and add the IPs of the machines.

To check that everything is OK, run this command:

ansible all -m ping -s -k -u vagrant

Result should be:
machine2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}


The first playbook installs Java and Tomcat on the second machine.

playbook-tomcat.yml :

 - hosts: machine2
   vars:
     http_port: 80
     max_clients: 200
   remote_user: vagrant
   tasks:
     - name: update the server
       apt: update_cache=yes
     - name: upgrade the server
       apt: upgrade=full
     - name: install java
       apt: name=default-jdk state=latest
     - name: install tomcat
       apt: name=tomcat7 state=latest
     - name: make sure tomcat is running
       service: name=tomcat7 state=started

ansible-playbook playbook-tomcat.yml --sudo -u vagrant --ask-pass