[PROPOSAL] Karaf Decanter monitoring


[PROPOSAL] Karaf Decanter monitoring

jbonofre
Hi all,

First of all, sorry for this long e-mail ;)

Some weeks ago, I blogged about using ELK (Logstash/Elasticsearch/Kibana)
with Karaf, Camel, ActiveMQ, etc. to provide a monitoring dashboard (to
know what's happening in Karaf and be able to store it for a long
period):

http://blog.nanthrax.net/2014/03/apache-karaf-cellar-camel-activemq-monitoring-with-elk-elasticsearch-logstash-and-kibana/

While this solution works fine, there are some drawbacks:
- it requires additional middleware on the machines: in addition to
Karaf itself, we have to install Logstash, Elasticsearch nodes, and the
Kibana console
- it's not usable "out of the box": you need at least to configure
Logstash (with the different input/output plugins) and Kibana (to create
the dashboards that you need)
- it doesn't cover all the monitoring needs, especially in terms of SLA:
we want to be able to raise alerts depending on events (for instance,
when a regex matches a log message, when a feature is uninstalled, when
a JMX metric is greater than a given value, etc.)

Actually, Karaf (and related projects) already provides most (if not
all) of the data required for monitoring. However, it would be very
helpful to have some "glue", ready to use and more user friendly,
including storage of the metrics/monitoring data.

With this in mind, I started a prototype of a monitoring solution for
Karaf and the applications running in Karaf.
The goal is to be very extensible, flexible, and easy to install and use.

In terms of architecture, we can find the following components:

1/ Collectors & SLA Policies
The collectors are services responsible for harvesting monitoring data.
We have two kinds of collectors:
- the polling collectors are invoked periodically by a scheduler
- the event-driven collectors react to events
Two collectors are already available:
- the JMX collector is a polling collector which harvests all MBean
attributes
- the Log collector is an event-driven collector, implementing a
PaxAppender, which reacts when a log message occurs
We can plan the following collectors:
- a Camel Tracer collector would be an event-driven collector, acting as
a Camel interceptor; it would allow tracing any Exchange in Camel

It's very dynamic (thanks to OSGi services), so it's possible to add a
new custom collector (a user/custom implementation), as sketched below.
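
To give an idea of that extension point, here is a minimal sketch of a
custom polling collector. The Collector interface and the service
registration are assumptions for illustration (the prototype API may
differ); the data shape follows the Map<Long, Map<String, Object>>
discussed later in this thread:

  import java.lang.management.ManagementFactory;
  import java.util.HashMap;
  import java.util.Map;

  // Collector.java - hypothetical contract: harvested key/value data,
  // keyed by the collection timestamp.
  public interface Collector {
      Map<Long, Map<String, Object>> collect();
  }

  // SystemLoadCollector.java - a custom polling collector harvesting a
  // single OS-level metric.
  public class SystemLoadCollector implements Collector {
      public Map<Long, Map<String, Object>> collect() {
          Map<String, Object> data = new HashMap<String, Object>();
          data.put("systemLoadAverage",
              ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage());
          Map<Long, Map<String, Object>> harvested =
              new HashMap<Long, Map<String, Object>>();
          harvested.put(System.currentTimeMillis(), data);
          return harvested;
      }
  }

Registering it as an OSGi service (for instance from a BundleActivator)
would be enough for the scheduler to pick it up:

  bundleContext.registerService(Collector.class.getName(),
      new SystemLoadCollector(), null);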

The collectors are also responsible for checking the SLA. As the SLA
policies are tied to the collected data, it makes sense for the
collector to validate the SLA and delegate any alert to the SLA services.
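
As a hedged illustration of that delegation (the SlaService interface
and the threshold policy below are made up for the example):

  import java.util.List;
  import java.util.Map;

  // SlaService.java - hypothetical alert service the collector
  // delegates to.
  public interface SlaService {
      void alert(String collector, String key, Object value, String policy);
  }

  // Inside a collector: validate a simple threshold policy on a
  // harvested value and delegate the alert to the SLA services.
  void checkSla(Map<String, Object> data, List<SlaService> slaServices) {
      Number heapUsed = (Number) data.get("HeapMemoryUsage.used");
      if (heapUsed != null && heapUsed.longValue() > 512L * 1024 * 1024) {
          for (SlaService sla : slaServices) {
              sla.alert("jmx", "HeapMemoryUsage.used", heapUsed,
                  "heap used > 512MB");
          }
      }
  }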

2/ Scheduler
The scheduler service is responsible for calling the polling collectors,
gathering the harvested data, and delegating it to the dispatcher.
We already have a simple scheduler (just a thread, sketched below), but
we can plan a Quartz scheduler (for advanced cron/trigger configuration)
and another one leveraging the Karaf scheduler.
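
For reference, the "just a thread" scheduler can be as small as the
sketch below. It reuses the hypothetical Collector interface from the
collector sketch above; the Dispatcher contract is sketched under 3/:

  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  // SimpleScheduler.java - a minimal polling loop: periodically invoke
  // every polling collector, gather the harvested data, and delegate it
  // to the dispatcher.
  public class SimpleScheduler extends Thread {

      private final List<Collector> collectors;
      private final Dispatcher dispatcher;
      private final long intervalMillis;
      private volatile boolean running = true;

      public SimpleScheduler(List<Collector> collectors,
                             Dispatcher dispatcher, long intervalMillis) {
          this.collectors = collectors;
          this.dispatcher = dispatcher;
          this.intervalMillis = intervalMillis;
      }

      public void run() {
          while (running) {
              Map<Long, Map<String, Object>> gathered =
                  new HashMap<Long, Map<String, Object>>();
              for (Collector collector : collectors) {
                  gathered.putAll(collector.collect());
              }
              dispatcher.dispatch(gathered);
              try {
                  Thread.sleep(intervalMillis);
              } catch (InterruptedException e) {
                  running = false;
              }
          }
      }
  }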

3/ Dispatcher
The dispatcher is called by the scheduler or the event-driven collectors
to dispatch the collected data to the appenders.
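
A sketch with the same hypothetical contracts: the dispatcher simply
fans the collected data out to every registered appender (the Appender
interface below is also what the appender sketch under 4/ implements):

  import java.util.List;
  import java.util.Map;

  // Dispatcher.java / Appender.java - hypothetical contracts.
  public interface Dispatcher {
      void dispatch(Map<Long, Map<String, Object>> data);
  }

  public interface Appender {
      void append(Map<Long, Map<String, Object>> data);
  }

  // SimpleDispatcher.java - in practice the appender list would be
  // backed by an OSGi service tracker, so appenders can come and go.
  public class SimpleDispatcher implements Dispatcher {

      private final List<Appender> appenders;

      public SimpleDispatcher(List<Appender> appenders) {
          this.appenders = appenders;
      }

      public void dispatch(Map<Long, Map<String, Object>> data) {
          for (Appender appender : appenders) {
              appender.append(data);
          }
      }
  }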

4/ Appenders
The appender services are responsible for sending/storing the collected
data to target systems.
For now, we have two appenders:
- a log appender which just logs the collected data (see the sketch
after this list)
- an Elasticsearch appender which sends the collected data to an
Elasticsearch instance. For now, it uses an "external" Elasticsearch,
but I'm working on an elasticsearch feature allowing Elasticsearch to be
embedded in Karaf (it's mostly done).
We can plan the following other appenders:
- redis, to send the collected data to the Redis messaging system
- jdbc, to store the collected data in a database
- jms, to send the collected data to a JMS broker (like ActiveMQ)
- camel, to send the collected data to a Camel direct-vm/vm endpoint of
a route (it would create an internal route)
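
As an example, the log appender can boil down to a few lines (a sketch
against the hypothetical Appender interface from the dispatcher sketch
above):

  import java.util.Map;

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  // LogAppender.java - the simplest appender: just log each collected
  // entry.
  public class LogAppender implements Appender {

      private static final Logger LOGGER =
          LoggerFactory.getLogger(LogAppender.class);

      public void append(Map<Long, Map<String, Object>> data) {
          for (Map.Entry<Long, Map<String, Object>> entry : data.entrySet()) {
              LOGGER.info("{} => {}", entry.getKey(), entry.getValue());
          }
      }
  }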

5/ Console/Kibana
The console is composed of two parts:
- an AngularJS or Bootstrap layer allowing configuration of the SLA and
global settings
- an embedded Kibana instance with pre-configured dashboards (when the
Elasticsearch appender is used). We will have a set of already-created
Lucene queries and a kind of "Karaf/Camel/ActiveMQ/CXF" dashboard
template. The Kibana instance will be embedded in Karaf (not external).

Of course, we have ready-to-use features, allowing very easy
installation of just the modules that we want.

I named the prototype Karaf Decanter. I don't have a preference about
the name, or about the location of the code (it could be a Karaf
subproject like Cellar or Cave, or live directly in the Karaf codebase).

Thoughts?

Regards
JB
--
Jean-Baptiste Onofré
[hidden email]
http://blog.nanthrax.net
Talend - http://www.talend.com

Re: [PROPOSAL] Karaf Decanter monitoring

ksobkowiak
+1

I think it's a good idea; it's good to have monitoring functionality
for Karaf. I would prefer to make it a separate subproject like Cellar,
to keep the Karaf code base simple and allow a separate release cycle
(for the same reason we had plans to extract the enterprise features
into a separate subproject). It could be a Karaf add-on. Karaf Decanter
is a good name.

Regards
Krzysztof

On 14.10.2014 17:12, Jean-Baptiste Onofré wrote:

> [...]


--
Krzysztof Sobkowiak

JEE & OSS Architect | Senior Solution Architect @ Capgemini | Committer
@ ASF
Capgemini <http://www.pl.capgemini.com/> | Software Solutions Center
<http://www.pl.capgemini-sdm.com/> | Wroclaw
e-mail: [hidden email] <mailto:[hidden email]> |
Twitter: @KSobkowiak
Calendar: http://goo.gl/yvsebC

Re: [PROPOSAL] Karaf Decanter monitoring

Matt Sicker
I had never heard of a decanter before, but now that I have, it's an
awesome name.

On 14 October 2014 11:06, Krzysztof Sobkowiak <[hidden email]>
wrote:

> [...]



--
Matt Sicker <[hidden email]>

Re: [PROPOSAL] Karaf Decanter monitoring

Łukasz Dywicki
I think there is a project which might have a similar scope: Sirona.
I like the general idea, but I do not like the idea of embedding Kibana.
Forcing the usage of any particular tool is just wrong. It also makes
sense to support Codahale metrics from the very beginning, as this
library is getting more and more popular.

+1 from me

Best regards,
Lukasz

2014-10-15 5:17 GMT+02:00 Andreas Pieber <[hidden email]>:

> Hey,
>
> The collection definitely sounds like a perfect idea for a Karaf
> subproject to me. Besides the great potential of the components, I
> like the especially fitting name 😊 +1
>
> Kind regards,
> Andreas
> On Oct 14, 2014 5:13 PM, "Jean-Baptiste Onofré" <[hidden email]> wrote:
>> [...]

Re: [PROPOSAL] Karaf Decanter monitoring

jbonofre
For Sirona, the scope is the same, but the implementation/view is
different (I'm on the Sirona PPMC ;)). However, I can see Decanter being
able to send data to/interact with Sirona.

As explained in the proposal, I don't want "external" middleware for
monitoring (for now, Sirona runs in Tomcat, for instance).

Kibana is available as a feature, but it's optional: if users want a
ready-to-use solution, they can install the decanter-collector-*,
decanter-simple-scheduler, decanter-appender-elasticsearch,
elasticsearch, and kibana features. But if they don't want to use Kibana
or Elasticsearch, they can use an alternative appender
(decanter-appender-jdbc, decanter-appender-zabbix,
decanter-appender-nagios, or decanter-appender-sirona, for instance).
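
Assuming the Decanter features repository is registered, the
ready-to-use path would then look something like this on the Karaf
console (a hypothetical session; the concrete collector feature names
are illustrative expansions of decanter-collector-*):

  karaf@root()> feature:install decanter-simple-scheduler
  karaf@root()> feature:install decanter-collector-jmx decanter-collector-log
  karaf@root()> feature:install decanter-appender-elasticsearch elasticsearch kibana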

For Codahale, good idea. Let me take a look at a collector for that.

Regards
JB

On 10/15/2014 07:53 AM, Łukasz Dywicki wrote:

> [...]

--
Jean-Baptiste Onofré
[hidden email]
http://blog.nanthrax.net
Talend - http://www.talend.com

Re: [PROPOSAL] Karaf Decanter monitoring

Guillaume Nodet-2
I worked on these things a year ago, specifically for Fabric, so there's
a lot of overlap between what you propose and the insight stuff inside
Fabric.
I think we should be able to contribute what we have too, trying to
abstract a bit the things that are Fabric-specific.
There's plenty of stuff we already have (JMX, Camel, Jetty and ActiveMQ
and Pax interceptors, Elasticsearch + indices housekeeping, etc.), so
there's no need to reinvent the wheel here.
There's definitely a great need and potential here, so I'd love to
collaborate in this area.

2014-10-14 17:12 GMT+02:00 Jean-Baptiste Onofré <[hidden email]>:

> [...]

Re: [PROPOSAL] Karaf Decanter monitoring

Filippo Balicchia-2
Hi JB,

I like the idea. Thanks for the explanation of the objectives.

For me, it's a +1.


Regards

--Filippo



2014-10-15 8:08 GMT+02:00 Jean-Baptiste Onofré <[hidden email]>:

> [...]

Re: [PROPOSAL] Karaf Decanter monitoring

jbonofre
Oh, by the way, I forgot the GitHub link:

https://github.com/jbonofre/karaf-decanter

Sorry about that, guys!

Regards
JB

On 10/14/2014 05:12 PM, Jean-Baptiste Onofré wrote:

> [...]

--
Jean-Baptiste Onofré
[hidden email]
http://blog.nanthrax.net
Talend - http://www.talend.com

Re: [PROPOSAL] Karaf Decanter monitoring

Guillaume Nodet-2
Great, thanks!

First technical question: can you explain what the
Map<Long, Map<String, Object>> in the API interfaces (Collector,
Appender, etc.) represents?

Guillaume

2014-10-15 11:08 GMT+02:00 Jean-Baptiste Onofré <[hidden email]>:

> [...]

Re: [PROPOSAL] Karaf Decanter monitoring

jbonofre
It's the collected data.

Basically, it's:

- the timestamp of the data/metric
- a map of key/value pairs (for instance, JMX attribute name => JMX
attribute value)
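
For instance, building one harvested JMX snapshot would look like this
(illustrative names and values):

  // One entry: collection timestamp -> map of harvested key/value pairs.
  Map<Long, Map<String, Object>> collected =
      new HashMap<Long, Map<String, Object>>();
  Map<String, Object> metrics = new HashMap<String, Object>();
  metrics.put("java.lang:type=Memory#HeapMemoryUsage.used", 123456789L);
  metrics.put("java.lang:type=Threading#ThreadCount", 87);
  collected.put(System.currentTimeMillis(), metrics);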

Regards
JB

On 10/15/2014 11:11 AM, Guillaume Nodet wrote:

> [...]

--
Jean-Baptiste Onofré
[hidden email]
http://blog.nanthrax.net
Talend - http://www.talend.com

Re: [PROPOSAL] Karaf Decanter monitoring

Charles Moulliard-2
Hi Jean Baptiste,

I like the project name ("decanter" = "décanter" in French, I suspect). Can
you add a README.md or README.adoc file to your project (github), when you
have the time of course, to explain what it does?

Regards,

On Wed, Oct 15, 2014 at 11:12 AM, Jean-Baptiste Onofré <[hidden email]>
wrote:

> It's the collected data.
>
> Basically, it's:
>
> - timestamp of the data/metric
> - map of key/value (for instance, JMX Attribute Name => JMX Attribute
> Value)
>
> Regards
> JB
>
>
> On 10/15/2014 11:11 AM, Guillaume Nodet wrote:
>
>> Great, thx !
>>
>> First technical question: can you explain what the
>> Map<Long, Map<String, Object>> in the API interfaces (Collector,
>> Appender, etc.) represents?
>>
>> Guillaume
>>
>> 2014-10-15 11:08 GMT+02:00 Jean-Baptiste Onofré <[hidden email]>:
>>
>>> Oh by the way, I forgot the github link:
>>>
>>> https://github.com/jbonofre/karaf-decanter
>>>
>>> Sorry about that guys !
>>>
>>> Regards
>>> JB
>>>
>>
> --
> Jean-Baptiste Onofré
> [hidden email]
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



--
Charles Moulliard
Apache Committer / Architect @RedHat
Twitter : @cmoulliard | Blog :  http://cmoulliard.github.io
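
To make the data structure discussed above concrete, here is a minimal
sketch of how a polling collector could populate the
Map<Long, Map<String, Object>> that Guillaume asked about. This is an
illustration only; the class name and attribute keys are invented, not
taken from the actual Decanter code:

    import java.util.HashMap;
    import java.util.Map;

    public class JmxCollectorSketch {

        // One harvest: snapshot timestamp -> (attribute name -> attribute value)
        public Map<Long, Map<String, Object>> collect() {
            // Illustrative values; a real collector would read MBean attributes
            Map<String, Object> attributes = new HashMap<String, Object>();
            attributes.put("HeapMemoryUsage.used", 123456789L);
            attributes.put("ThreadCount", 87);

            Map<Long, Map<String, Object>> data =
                    new HashMap<Long, Map<String, Object>>();
            data.put(System.currentTimeMillis(), attributes);
            return data;
        }
    }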

Re: [PROPOSAL] Karaf Decanter monitoring

jbonofre
Hi Charles,

Good idea, yes: I will prepare a README.md for the vote.

Regards
JB


--
Jean-Baptiste Onofré
[hidden email]
http://blog.nanthrax.net
Talend - http://www.talend.com

Re: [PROPOSAL] Karaf Decanter monitoring

Charles Moulliard-2
I don't know if you have planned something like this, but it could be
interesting to have a layer to plug in the backend (elasticsearch, ...), so
that we could later plug in another NoSQL backend (mongodb, influxdb,
...).


--
Charles Moulliard
Apache Committer / Architect @RedHat
Twitter : @cmoulliard | Blog :  http://cmoulliard.github.io

Re: [PROPOSAL] Karaf Decanter monitoring

Achim Nierbeck
Hi Charles,

I think it's what JB proposed in

4/ Appenders

> The appender services are responsible for sending/storing the collected
> data to target systems.
> For now, we have two appenders:
> - a log appender which just logs the collected data
> - an elasticsearch appender which sends the collected data to an
> elasticsearch instance. For now, it uses an "external" elasticsearch, but
> I'm working on an elasticsearch feature allowing elasticsearch to be
> embedded in Karaf (it's mostly done).
> We can plan the following other appenders:
> - redis to send the collected data to the Redis messaging system
> - jdbc to store the collected data in a database
> - jms to send the collected data to a JMS broker (like ActiveMQ)
> - camel to send the collected data to a Camel direct-vm/vm endpoint of a
> route (it would create an internal route)


so yes, this sounds like a good idea to me.
As usual, make everything OSGi-like, preferably services, and therefore
extendable/exchangeable :)

regards, Achim



--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer &
Project Lead
blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master
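
A minimal sketch of the pattern Achim describes, assuming a hypothetical
Appender interface (the interface shape, the service property, and the
class names here are illustrative, not the actual Decanter API): the
backend is just an OSGi service, so a new backend (mongodb, influxdb, ...)
can be plugged in by registering another Appender service.

    import java.util.Dictionary;
    import java.util.Hashtable;
    import java.util.Map;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Assumed shape of an appender service (illustrative only)
    interface Appender {
        void append(Map<Long, Map<String, Object>> collectedData);
    }

    public class MongoDbAppenderActivator implements BundleActivator {

        public void start(BundleContext context) {
            Appender appender = new Appender() {
                public void append(Map<Long, Map<String, Object>> collectedData) {
                    // Hypothetical: write each harvested snapshot to MongoDB here
                }
            };
            Dictionary<String, Object> props = new Hashtable<String, Object>();
            props.put("name", "mongodb"); // illustrative service property
            context.registerService(Appender.class.getName(), appender, props);
        }

        public void stop(BundleContext context) {
            // the service is unregistered automatically when the bundle stops
        }
    }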

Re: [PROPOSAL] Karaf Decanter monitoring

jbonofre
In reply to this post by Charles Moulliard-2
That's exactly the purpose of the appenders: elasticsearch is one
appender/backend, but you also have decanter-appender-log, and we can
implement decanter-appender-jdbc, decanter-appender-mongodb, etc. It's
just an Appender service to register: the dispatcher and the event
driven collectors will use it.

Regards
JB


--
Jean-Baptiste Onofré
[hidden email]
http://blog.nanthrax.net
Talend - http://www.talend.com
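
As an illustration of how small such an appender could be, here is a
sketch of a hypothetical decanter-appender-jdbc, reusing the illustrative
Appender interface from the sketch above (the table and column names are
invented for the example, not an actual schema):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.Map;
    import javax.sql.DataSource;

    public class JdbcAppender implements Appender {

        private final DataSource dataSource;

        public JdbcAppender(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public void append(Map<Long, Map<String, Object>> collectedData) {
            String sql = "INSERT INTO decanter_data (ts, name, value) VALUES (?, ?, ?)";
            Connection connection = null;
            try {
                connection = dataSource.getConnection();
                PreparedStatement statement = connection.prepareStatement(sql);
                // One row per metric: timestamp, metric name, metric value
                for (Map.Entry<Long, Map<String, Object>> snapshot : collectedData.entrySet()) {
                    for (Map.Entry<String, Object> metric : snapshot.getValue().entrySet()) {
                        statement.setLong(1, snapshot.getKey());
                        statement.setString(2, metric.getKey());
                        statement.setString(3, String.valueOf(metric.getValue()));
                        statement.addBatch();
                    }
                }
                statement.executeBatch();
            } catch (Exception e) {
                throw new RuntimeException("Failed to store collected data", e);
            } finally {
                if (connection != null) {
                    try { connection.close(); } catch (Exception ignore) { }
                }
            }
        }
    }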

Re: [PROPOSAL] Karaf Decanter monitoring

Guillaume Nodet-2
In reply to this post by jbonofre
I think this would need to be reworked a bit.
Especially, I don't think a Map keyed by the timestamp is correct, as you
could have multiple events collected with the same timestamp.

Basically, what I ended up with for fabric was the following data:

https://github.com/fabric8io/fabric8/blob/master/insight/insight-storage/src/main/java/io/fabric8/insight/storage/StorageService.java#L27
So
  * a "type" of events (for example, jetty, camel, metrics, etc...)
  * a "timestamp"
  * a blob of json data
This was specifically written for elasticsearch.

So in our latest experiments, we ended up passing a more structured object
instead of a blob of json data:

https://github.com/fabric8io/fabric8/blob/master/insight/insight-metrics-model/src/main/java/io/fabric8/insight/metrics/model/MetricsStorageService.java

The main problem, I think, is that we may not want to store log events in
the same backend as metrics.

Something else I've been thinking about is the relation with Flume for
aggregating / conveying data.  It could be useful in big deployments, so we
need to keep that in mind.
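
To make the collision concrete, here is a small sketch of the problem
Guillaume points out, together with the kind of typed-event structure he
suggests instead (the Event class is illustrative; see the fabric8 insight
links above for the real interfaces):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class EventModelSketch {

        // One collected event: a type, a timestamp, and arbitrary properties
        static class Event {
            final String type;
            final long timestamp;
            final Map<String, Object> properties;

            Event(String type, long timestamp, Map<String, Object> properties) {
                this.type = type;
                this.timestamp = timestamp;
                this.properties = properties;
            }
        }

        public static void main(String[] args) {
            long now = System.currentTimeMillis();

            Map<String, Object> logEvent = new HashMap<String, Object>();
            logEvent.put("level", "ERROR");
            Map<String, Object> jmxEvent = new HashMap<String, Object>();
            jmxEvent.put("ThreadCount", 87);

            // With a Map keyed by timestamp, the second event overwrites the first
            Map<Long, Map<String, Object>> byTimestamp =
                    new HashMap<Long, Map<String, Object>>();
            byTimestamp.put(now, logEvent);
            byTimestamp.put(now, jmxEvent);
            System.out.println(byTimestamp.size()); // prints 1: the log event is lost

            // With a list of typed events, both survive and keep their "type"
            List<Event> events = new ArrayList<Event>();
            events.add(new Event("log", now, logEvent));
            events.add(new Event("jmx", now, jmxEvent));
            System.out.println(events.size()); // prints 2
        }
    }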



Re: [PROPOSAL] Karaf Decanter monitoring

jbonofre
Hi Guillaume,

I agree: the first prototype is really monitoring oriented. So it's
mostly metric and simple message oriented (that's why a map of
String/Object fit fine).
If we plan to be more generic (like the Camel auditing feature), we
should have a more generic API.

Anyway, the purpose of the Collected Data Model is not to be tied to
the appender (backend). The point is that the collected data can be
stored/sent to any system (elasticsearch, jdbc, zabbix, nagios, whatever).

A very generic Collected Data Model, populated by the collectors and
dispatched to the appenders, makes sense.

Regards
JB


--
Jean-Baptiste Onofré
[hidden email]
http://blog.nanthrax.net
Talend - http://www.talend.com
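
One possible shape for the very generic Collected Data Model JB describes,
sketched as a guess rather than the actual design: collectors only produce
events, appenders only consume them, and the dispatcher fans events out,
so no part of the model is tied to a particular backend.

    import java.util.List;
    import java.util.Map;

    // Illustrative model: a collected event is a type, a timestamp,
    // and arbitrary key/value properties.
    interface CollectedEvent {
        String getType();              // "jmx", "log", "camel", ...
        long getTimestamp();
        Map<String, Object> getProperties();
    }

    // Each backend (elasticsearch, jdbc, zabbix, nagios, ...) is just
    // another implementation of this interface.
    interface EventAppender {
        void append(CollectedEvent event);
    }

    // The dispatcher fans each event out to every registered appender;
    // in Karaf the appenders would be discovered as OSGi services.
    class Dispatcher {

        private final List<EventAppender> appenders;

        Dispatcher(List<EventAppender> appenders) {
            this.appenders = appenders;
        }

        void dispatch(List<CollectedEvent> events) {
            for (CollectedEvent event : events) {
                for (EventAppender appender : appenders) {
                    appender.append(event);
                }
            }
        }
    }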

[RESULT][PROPOSAL] Karaf Decanter monitoring

jbonofre
In reply to this post by jbonofre
Hi all,

this vote passed with only +1 votes.

I will push my latest changes to the github repo, request the git repo
from INFRA (to push there and remove the github one), and create a
component in Jira.

Thanks all for your vote.

Regards
JB

On 10/14/2014 05:12 PM, Jean-Baptiste Onofré wrote:

> Of course, we have ready to use features, allowing to very easily
> install modules that we want.
>
> I named the prototype Karaf Decanter. I don't have preference about the
> name, and the location of the code (it could be as Karaf subproject like
> Cellar or Cave, or directly in the Karaf codebase).
>
> Thoughts ?
>
> Regards
> JB

--
Jean-Baptiste Onofré
[hidden email]
http://blog.nanthrax.net
Talend - http://www.talend.com

Re: [RESULT][PROPOSAL] Karaf Decanter monitoring

jbonofre
Hi guys,

just a quick update about Karaf Decanter.

INFRA created the git repository. I will clean up the legal files and
add the latest features, then push to karaf-decanter later today.

Regards
JB

--
Jean-Baptiste Onofré
[hidden email]
http://blog.nanthrax.net
Talend - http://www.talend.com

Re: [RESULT][PROPOSAL] Karaf Decanter monitoring

Achim Nierbeck
Hi JB,

awesome. Thanks for the feedback :-)

regards, Achim

--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer &
Project Lead
blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master