NRPE finally adds true SSL/TLS security to its connections

After so many years of people pointing out that the so-called security of the NRPE agent is not real security, it seems the developers working on the NRPE project have finally taken those concerns to heart and incorporated a proper SSL/TLS security configuration into the agent:

# SSL/TLS OPTIONS
# These directives allow you to specify how to use SSL/TLS.

# SSL VERSION
# This can be any of: SSLv2 (only use SSLv2), SSLv2+ (use any version),
# SSLv3 (only use SSLv3), SSLv3+ (use SSLv3 or above), TLSv1 (only use
# TLSv1), TLSv1+ (use TLSv1 or above), TLSv1.1 (only use TLSv1.1),
# TLSv1.1+ (use TLSv1.1 or above), TLSv1.2 (only use TLSv1.2),
# TLSv1.2+ (use TLSv1.2 or above)
# If an "or above" version is used, the best will be negotiated. So if both
# ends are able to do TLSv1.2 and you specify SSLv2+, you will get TLSv1.2.

#ssl_version=SSLv2+

# SSL USE ADH
# This is for backward compatibility and is DEPRECATED. Set to 1 to enable
# ADH or 2 to require ADH. 1 is currently the default but will be changed
# in a later version.

#ssl_use_adh=1

# SSL CIPHER LIST
# This lists which ciphers can be used. For backward compatibility, this
# defaults to 'ssl_cipher_list=ALL:!MD5:@STRENGTH' in this version but
# will be changed to something like the example below in a later version of NRPE.

#ssl_cipher_list=ALL:!MD5:@STRENGTH
#ssl_cipher_list=ALL:!aNULL:!eNULL:!SSLv2:!LOW:!EXP:!RC4:!MD5:@STRENGTH

# SSL Certificate and Private Key Files

#ssl_cacert_file=/etc/ssl/servercerts/ca-cert.pem
#ssl_cert_file=/etc/ssl/servercerts/nagios-cert.pem
#ssl_privatekey_file=/etc/ssl/servercerts/nagios-key.pem

# SSL USE CLIENT CERTS
# This option determines client certificate usage.
# Values: 0 = Don't ask for or require client certificates (default)
# 1 = Ask for client certificates
# 2 = Require client certificates

#ssl_client_certs=0

# SSL LOGGING
# This option determines which SSL messages are sent to syslog. OR values
# together to specify multiple options.

# Values: 0x00 (0) = No additional logging (default)
# 0x01 (1) = Log startup SSL/TLS parameters
# 0x02 (2) = Log remote IP address
# 0x04 (4) = Log SSL/TLS version of connections
# 0x08 (8) = Log which cipher is being used for the connection
# 0x10 (16) = Log if client has a certificate
# 0x20 (32) = Log details of the client's certificate if it has one
# -1 or 0xff or 0x3f = All of the above

#ssl_logging=0x00
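The logging value is a bitmask, so the individual options above can be ORed together. A quick sketch in Python (the flag names are mine, purely illustrative; the hex values come straight from the comments above):

```python
# ssl_logging flag values, taken from the NRPE sample config above.
LOG_PARAMS   = 0x01  # startup SSL/TLS parameters
LOG_IP       = 0x02  # remote IP address
LOG_VERSION  = 0x04  # SSL/TLS version of connections
LOG_CIPHER   = 0x08  # cipher used for the connection
LOG_HAS_CERT = 0x10  # whether the client has a certificate
LOG_CERT     = 0x20  # details of the client's certificate

# ORing every flag together gives 0x3f, i.e. "all of the above".
everything = (LOG_PARAMS | LOG_IP | LOG_VERSION
              | LOG_CIPHER | LOG_HAS_CERT | LOG_CERT)
print(hex(everything))  # -> 0x3f

# e.g. to log only the remote IP and the negotiated TLS version:
# ssl_logging=0x06
print(hex(LOG_IP | LOG_VERSION))  # -> 0x6
```

Note that ORing all six flags gives 0x3f (63), not 0x2f, which is why -1 or 0xff work equally well as "everything" values.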

This is a massive step up from how things were before. Now we need to see whether the NRPE plugin (a.k.a. check_nrpe) and Nagios Core have also been updated to include the directives needed to talk to the improved NRPE agent.
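On the client side, the "TLSv1.2+" idea maps directly onto what modern TLS libraries expose as a version floor. A minimal sketch using Python's stdlib ssl module (this is not NRPE's own code, just an illustration of the concept):

```python
import ssl

# Create a client context and refuse anything below TLS 1.2 --
# the moral equivalent of NRPE's ssl_version=TLSv1.2+ setting.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Both ends still negotiate the best version they share, so two
# TLS 1.3-capable peers will end up on TLS 1.3 despite this floor.
print(ctx.minimum_version.name)  # -> TLSv1_2
```

The same floor-plus-negotiation behaviour is what the "If an 'or above' version is used, the best will be negotiated" comment in the sample config describes.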

A new Meetup session – all about monitoring

Monitoring, monitoring, monitoring

Thursday, May 19, 2016, 6:30 PM

Campus TLV
Yigal Alon 98, Electra Tower, 34th floor, Tel Aviv-Yafo, IL


This meetup is a joint event with the Devops-in-Israel group and is all about monitoring! We have two great speakers who will talk about their views on when and how to monitor your infrastructure. Don't forget to invite your developer friends, as they will find it interesting as well. 1. Monitoring – When to start? by Assaf Flatto. Open source monito…


Icinga: from stand-alone to Cloud oriented evolution

When you start talking to System Administrators, DevOps engineers, Developers or NOC personnel about open source NMS* tools, the first one that comes to mind is Nagios®. When you tell them that Icinga is a fork of Nagios, they are quick to dismiss it too, due to the reputation that Nagios carries with it.

Icinga owes its origin to Nagios' limitations. It started as a fork of the Nagios code, and as such was encumbered with the limitations of the original design: classic data-centre computing, with bare metal and manual management. But it evolved into something quite different.

Icinga was born out of the slow progress of the Nagios project in responding to requests made by the community, and the inability of that same community to contribute, sending updates and improvements upstream to the core product. Wanting to give the community its say, Icinga started by asking for improvement ideas. Many improvements were added to the Icinga code, but the main requests coming back from users were redundancy and adaptability.

Creative solutions were tried: integrations with CMDBs, active-passive fail-over with data syncing, and so on, but none of them truly solved the problem inherent in Nagios: a centralized, standalone box that required constant updating to keep up with new servers and services.

The Icinga team knew they had to adapt the project to the Cloud frame of thought and its mode of adaptability. Thus Icinga2 and the Icinga2 API were conceived and developed. Icinga2 was designed from the ground up, forgoing the old code base and starting with a fresh view. Some principles were carried over (hosts, services, hostgroups, servicegroups and dependencies), but the approach to those concepts was altered:

  • The configuration scheme was changed into a configuration DSL, which allows quasi-code to be incorporated in definitions and includes conditional logic for improved dynamic adaptation to the actions taken.

  • While the old classic UI works seamlessly with the new engine, a new Web 2.0 user interface (titled Icinga Web 2) was designed, with the capability to incorporate other tools that expand the monitoring exposure of the dashboard, like the ELK* stack, Graphite, NagVis and PNP4Nagios.

  • Icinga2 comes with a truly distributed solution that allows monitoring in geographically separated zones to stay in sync, and provides HA capability for redundancy, supported by SSL-based communication to increase the security of the data transferred between nodes.

  • The Icinga2 API is a fully-fledged REST API that allows registration and configuration of nodes and services at run time, without the need to modify configuration files or restart the main process when systems need to be monitored or removed (for example, when a cloud instance goes off-line).

  • Icinga1 was available as a preconfigured VM* image for those who wanted to test and learn how to use the system. Moving with the times, Icinga2 also comes in forms that are easier and faster to adopt and deploy: Docker and Vagrant. The Icinga team also provides ready-to-use playbooks/recipes/manifests for the most popular configuration tools: Puppet, Chef and Ansible.
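To give a flavour of the configuration DSL mentioned above, here is a small, hypothetical Icinga2 snippet (host names and custom variables are examples only): an apply rule with a conditional assignment, so services attach themselves to matching hosts instead of being listed one by one.

```
// Hypothetical host definition -- name, address and vars are examples.
object Host "web01" {
  address = "10.0.0.11"
  vars.os = "Linux"
}

// Apply the "http" service to every host whose vars.os is "Linux";
// adding a new Linux host automatically picks up this check.
apply Service "http" {
  check_command = "http"
  assign where host.vars.os == "Linux"
}
```

This assign/where logic is the kind of quasi-code the DSL bullet refers to: the configuration reacts to host attributes rather than enumerating every host-service pair by hand.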

If you are looking to learn how to implement Icinga2 in your organization, all of these developments and more are covered in the official training courses provided by Icinga training partners, like Aiki Linux Ltd.
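To make the run-time registration idea concrete, here is a minimal sketch of creating a host through the Icinga2 REST API using only Python's stdlib. The endpoint URL, credentials and host attributes are hypothetical, and the request is built but deliberately not sent:

```python
import json
import urllib.request

# Hypothetical endpoint: the Icinga2 API listens on port 5665 by default,
# and new objects are created with PUT /v1/objects/hosts/<name>.
url = "https://icinga2.example.com:5665/v1/objects/hosts/web01"

payload = {
    "templates": ["generic-host"],  # template assumed to exist on the server
    "attrs": {
        "address": "10.0.0.42",
        "check_command": "hostalive",
    },
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Accept": "application/json"},
    method="PUT",
)

# In real use you would add HTTP basic auth for an API user and then call
# urllib.request.urlopen(req); omitted here since it needs a live instance.
print(req.get_method(), req.full_url)
```

The point is that adding or removing a monitored node becomes a single authenticated HTTP call at run time, with no config file edits and no restart of the monitoring core.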

Glossary:

NMS = Network Monitoring System, a software tool that enables querying the state of the nodes that exist in the company’s infrastructure (local & remote).

ELK = ElasticSearch, LogStash & Kibana stack, a combination of software tools that allows collection and aggregation of logs and data for analysis and study.

CMDB = Configuration Management Database 

Elasticsearch and Logstash, or Dev-driven infrastructure

In the past couple of weeks I have been working on implementing an Elasticsearch solution; combined with Logstash, we hope it will replace the existing Splunk system within the infrastructure.

I had built a Chef cookbook to implement it, and within the confines of testing it worked. Then one of the developers “complained” that there is a newer version, and since they use it in their environment, we should use it too.

Suffice it to say that I had to spend an entire day trying to fix it in the system we are attempting to take to production, only to eventually learn that the versions are incompatible with the method we want to use; I then had to roll back all the versions and re-apply the system.

In many places I have witnessed a mentality where the developers set the standard policy on tools and implementation. More often than not this backfires: the lead developer leaves, or a system admin gets “dumped” with a tool and is expected to know it from day one, and when it breaks, no one asks the developers anything and the blame falls on the ops person.

 

So today, when the developer gave his input about upgrading the incompatible component instead of rolling back to a version we know works, I told him flat out: no. We will upgrade only when we in Systems decide we want to, not at their wish.

Chef, Git and Ruby

In the last month I have been working on building a Chef configuration for deployment and management of the current client's infrastructure. It is a very complex setting for an ISP, with many bespoke settings and applications, so any existing Chef recipes would need such heavy modification that we are creating them from scratch.

The choice of Git as the version control system was dictated by the development team, and the more I work with it the more I learn to HATE it. It is full of features to the point of becoming annoying, and when you want to drop a piece of work and move on to something else, it twists your arm into acting in a specific way that I find upsetting.

Chef and Ruby I am learning to accept, and I am becoming competent in their use; so much so that a project I was initially planning to build in Puppet, I am now contemplating migrating to Chef.