Microsoft’s divide and conqueror

Active Directory and LDAP are the two most used authentication tools in the world today, used by many companies and online services to authenticate and authorise users for access to the resources they provide.

We have been working on a project for a customer in which we had to use the company's Active Directory to authenticate users to the UI of the platform. The system runs on Linux, so we configured LDAP to query the AD.

Microsoft is proud to announce that it is the largest contributor to open source projects on GitHub today, and declares that it "embraces" open source ("look, you can even run MS SQL on Linux").

So far everything sounds very nice and simple, and then we tried to get the list of users and encountered a simple, ugly, frustrating truth:

 Contributing != Collaborating 

Starting in 2003, Microsoft added a limitation to the AD configuration that prevents any other protocol querying the AD from getting back more than 1,000 records in a single result set. If you want the list of users and the company has 1,002, your LDAP query will only give back either the first 1,000 or the last (depending on your filters), and if you have more than 10,000, you are in big trouble.

There is a "fix" posted on Microsoft TechNet that can allow you to get more than 1,000 results, but it might not be suitable for everyone, as not every Linux system person has access to, or the cooperation of, the company's IT to implement it.
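On the client side, OpenLDAP's ldapsearch can request the Simple Paged Results control, which asks the server to hand results back in pages rather than one capped response; whether that helps in a given environment still depends on the AD policies in place and on the client tooling you use. A minimal sketch (the server, bind account and base DN below are placeholders):

# Fetch all user accounts from AD in pages of 1,000 using the paged results control
ldapsearch -LLL -H ldap://dc01.example.com \
    -D "svc_ldap@example.com" -W \
    -b "dc=example,dc=com" \
    -E pr=1000/noprompt \
    "(objectClass=user)" sAMAccountName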

This behaviour persists in the 2008 and later versions of the AD platform, so we can see that Microsoft might be "embracing" open source but is very far from integrating with it. The change needed to let these two tools work seamlessly with each other should not be that complex, as it was possible in the past, but it seems that the commercial aspect [e.g. forcing others to move to AD] is the prevailing thought that stops the change.

A Foray into Docker

One of our clients is attempting to break their product into microservices and Docker containerisation, and asked us to help in building the Docker containers and images.

Having had little exposure to Docker prior to this engagement, our consultant had his doubts about the ability to break the requested component into a container in the time allocated for the task.

The online learning apps that are available for free were a great help, and in a matter of two days he was able to provide the client with a baseline image that can serve not just the specified project but also other parts of the overall solution, with only minor alterations added to the Dockerfile.

Looking at the project now, the consultant said he would be able to commit to the timetable and might be able to provide more capabilities to the containerisation plan.

The single issue that was encountered, and might not be well explained or documented, is the ability to export/save/import images. It seemed logical that you could export a layer from an existing image without needing to run the container; however, that is not the case.*

When you want to apply a layer to the image and save it for further building, you must "create" a container from the image and only then export it to a file. We had already built the image using the "docker build" command, so why not take that image and export it with the newly added layer to allow the creation of the new "base" image? It seems a bit counterproductive.

We understand that running the container is used for testing and ensuring that the build was successful, but shouldn't that be the builder's choice? If a builder removes the CMD "echo IT works" directive at the end of the Dockerfile, it should be his choice whether to run the container or immediately export it as a new image.
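For reference, a minimal sketch of the round-trip described above; the image and container names are hypothetical:

# build the image from a Dockerfile
docker build -t base-image:1.0 .
# a container must exist before anything can be exported
docker create --name tmp-export base-image:1.0
# export the container filesystem to a tarball
docker export tmp-export > base-image.tar
# import the tarball back as a new, flattened base image
docker import base-image.tar new-base:1.0
# clean up the intermediate container
docker rm tmp-export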

Despite that small issue, we believe that Docker is a great tool, and will work to deepen our exposure to it.

*All the comments above refer to CLI interactions; if there are tools that overcome this issue, we have not used them and as such cannot speculate on their usability.

Simple things in life

In many of the places we work we have to manage a client's servers, and remembering IPs and names can be tiresome or very confusing.

SSH has a wonderful ability to configure "shortcuts" that associate names with servers and even define the login you want to use for each one; the problem is that you need to manage the file and add those entries manually (or even remember to do so).

A simple bash function can help make life easier:

mkshtc () {
    # split user@host into its two parts
    _user=$(echo "$1" | awk '{split($1,a,"@"); print a[1]}')
    _host=$(echo "$1" | awk '{split($1,a,"@"); print a[2]}')
    # append a Host entry to the SSH client configuration
    printf "Host $2\n\tHostname $_host\n\tUser $_user\n" >> ~/.ssh/config
}

Adding this to your .bashrc file lets you add new servers to the config file with a one-liner:

mkshtc admin@192.168.0.11 glare_node_1

and the entry is added to your .ssh/config file

# cat ~/.ssh/config
Host glare_node_1
Hostname 192.168.0.11
User admin
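
From then on you can connect using just the alias:

ssh glare_node_1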

Easy and simple, and now you can use names for your SSH connections (think of it as a DNS for your logins). The function can easily be expanded to include other attributes for the host, like the port and identity file:

mkshtc () {
    # split user@host into its two parts
    _user=$(echo "$1" | awk '{split($1,a,"@"); print a[1]}')
    _host=$(echo "$1" | awk '{split($1,a,"@"); print a[2]}')
    # third argument: the name of a key file under ~/.ssh
    _file="$3"
    printf "Host $2\n\tHostname $_host\n\tUser $_user\n\tIdentityFile ~/.ssh/$_file\n" >> ~/.ssh/config
}
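
For example, assuming a key file named my_key under ~/.ssh (the host, alias and key name here are placeholders):

mkshtc admin@192.168.0.11 glare_node_1 my_key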

Just make sure it is the right one for your needs.

Talking at Icinga Camp Amsterdam

 


Aiki Linux will be delivering a talk at the upcoming Icinga Camp Amsterdam on the 28th of June.
Our founder, Assaf Flatto, will be speaking about when you need to start your monitoring.

Check out all of the talks and other topics covered at the Icinga Camp at www.icinga.org; there are many interesting items and talks about the development of Icinga.
If you would like to schedule a chat with Assaf, please drop us a line so we can arrange it.

A new Meetup session – all about the monitoring

Monitoring, monitoring, monitoring

Thursday, May 19, 2016, 6:30 PM

Campus TLV
Yigal Alon 98, Electra Tower, 34th floor, Tel Aviv-Yafo, IL


This meetup is a joint event with the Devops-in-Israel group and is all about monitoring! We have two great speakers who will talk about their views on when and how to monitor your infrastructure. Don't forget to invite your developer friends, as they will find it interesting as well. 1. Monitoring – When to start? by Assaf Flatto. Open source monito…


Icinga: from stand-alone to Cloud oriented evolution

When you start talking to system administrators, DevOps engineers, developers or NOC personnel about open source NMS* tools, the first one that comes to mind is Nagios®. When you tell them that Icinga is a fork of Nagios, they are quick to dismiss it too, due to the reputation that Nagios carries with it.

Icinga owes its origin to Nagios' limitations. It started as a fork of the Nagios code, and as such was encumbered with the limitations of the original design (classic data centre computing: bare metal and manual management), but it evolved into something quite different.

Icinga was born due to the slow progress of the Nagios project in responding to requests made by the community, and the inability of that same community to contribute, send updates and improvements upstream to the core product. Wanting to allow the community to have its say, Icinga started by asking for improvement ideas. Many improvements were added to the Icinga code, but the main requests that kept coming back from users were redundancy and adaptability.

Creative solutions were thought of: integrations with CMDBs*, active-passive fail-over with data syncing, etc., but none of those truly solved the problem inherent in Nagios: a centralised, standalone box that required constant updating to keep up with new servers or services.

The Icinga team knew they had to adapt the project to the Cloud frame of thought and its mode of adaptability. Thus Icinga2 and the Icinga2 API were conceived and developed. Icinga2 was designed from the ground up, forgoing the old code base and starting from a fresh view. Some principles were carried over (hosts, services, hostgroups, servicegroups and dependencies), but the approach to those concepts was altered:

  • The configuration scheme was changed into a configuration DSL, which allows quasi-code to be incorporated in the definitions and includes conditional logic for improved dynamic adaptation of the actions taken.

  • While the old classic UI works seamlessly with the new engine, a new Web 2.0 user interface (titled Icinga Web 2) was designed, which includes the ability to incorporate other tools that expand the monitoring exposure of the dashboard, like the ELK* stack, Graphite, NagVis and PNP4Nagios.

  • Icinga2 comes with a truly distributed solution that allows monitoring in geographically separated zones to sync and transfer HA capability for redundancy, backed by SSL-based communication to increase the security of the data transferred between nodes.

  • The Icinga2 API is a fully fledged REST API that allows registration and configuration of nodes and services at run time, without needing to modify configuration files or restart the main process when systems have to be monitored or removed (for example, when a cloud instance goes off-line); see the sketch after this list.

  • Icinga1 was available as a preconfigured VM* image for those who wanted to test and learn how to use the system. Moving with the times, Icinga2 comes in other forms that are easier and faster to adopt and deploy: Docker and Vagrant. The Icinga team also provides ready-to-use playbooks/recipes/manifests for the most popular configuration tools: Puppet, Chef and Ansible.
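
As an illustration of the run-time registration mentioned above, here is a minimal sketch using curl; the API user, password, hostnames and host object shown are placeholders and assume an ApiUser object has already been configured on the master:

# Register a new host object through the Icinga2 REST API at run time
curl -k -s -u apiuser:apipassword \
    -H 'Accept: application/json' \
    -X PUT "https://icinga-master.example.com:5665/v1/objects/hosts/cloud-node-01" \
    -d '{ "attrs": { "check_command": "hostalive", "address": "192.0.2.10" } }'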

If you are looking to learn how to implement Icinga2 in your organisation, all those developments and more topics are covered in the official training courses provided by Icinga training partners, like Aiki Linux Ltd.

Glossary:

NMS = Network Monitoring System, a software tool that enables querying the state of the nodes that exist in the company’s infrastructure (local & remote).

ELK = Elasticsearch, Logstash & Kibana stack, a combination of software tools that allows collection and aggregation of logs and data for analysis and study.

CMDB = Configuration Management Database

VM = Virtual Machine

New Offices

We are proud to announce the opening of our new branch and offices in Israel.

After continuous deliberation and consideration, we have decided to open a branch in Israel to expand our growing presence and be able to provide better services in the region.