Fabrix.ai integration with Splunk ITSI - Meraki Walk-thru
Oct 23, 2025
Transcript
00:02
We will look at how the Fabrix.ai integration with Splunk ITSI enables automated onboarding for the Meraki domain.
00:10
Here we are looking at the Meraki network in the ITSI service map, along with all the underlying services and KPIs, created completely automatically using the Fabrix.ai solution.
00:24
Let us see how this works.
00:26
The process involves three high level steps.
00:31
In the first step, we deploy the Meraki solution pack on the Fabrix.ai platform, which discovers all the Meraki infrastructure along with the related telemetry data and performance metrics.
00:46
By the end of this step, the Meraki infrastructure is fully discovered, with the performance and telemetry data displayed in Fabrix.ai dashboards and persisted in pstreams.
01:01
In step two, continuous ingestion and mapping pipelines take the data from the Fabrix.ai data stores and copy it into Splunk indexes on a continuous basis using the HTTP Event Collector mechanism.
01:21
In this step we can also enrich the data or perform field mappings.
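The ingestion step described above can be sketched with a small Python example that maps and enriches a raw Meraki record, then posts it to Splunk's HTTP Event Collector. The endpoint URL, token, sourcetype, index, and field names below are placeholders assumed for illustration, not values from the actual solution pack.

```python
import json
import urllib.request

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # assumed HEC endpoint
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder token

def map_meraki_record(record: dict) -> dict:
    """Enrich a raw Meraki record and wrap it in the HEC event envelope."""
    event = dict(record)
    # Illustrative field mapping: normalize the device serial field name.
    if "serialNumber" in event:
        event["serial"] = event.pop("serialNumber")
    event["vendor"] = "meraki"  # illustrative enrichment
    return {
        "event": event,
        "sourcetype": "fabrix:meraki:metrics",  # assumed sourcetype
        "index": "meraki_metrics",              # assumed index name
    }

def send_to_hec(payload: dict) -> None:
    """POST one event to the Splunk HTTP Event Collector."""
    req = urllib.request.Request(
        HEC_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)  # raises on non-2xx responses

payload = map_meraki_record({"serialNumber": "Q2XX-XXXX-XXXX", "status": "online"})
```

A real pipeline would batch events and handle retries; this only shows the mapping and the HEC envelope shape.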
01:26
And in the final step, a no-code workflow creates all the different artifacts in ITSI: ITSI services, ITSI KPIs, entity types, vital metrics, correlation searches, and notable event aggregation policies.
01:49
All of these are created automatically within a few minutes in ITSI.
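Behind the no-code workflow, artifacts like these are typically created through ITSI's REST interface. As a rough sketch only, the example below assembles a minimal service definition and posts it to the ITSI `itoa_interface` endpoint; the host, session key, and the exact fields of the service body are assumptions for illustration and will differ from the workflow's real payloads.

```python
import json
import urllib.request

# Assumed splunkd management endpoint for ITSI's object interface.
ITSI_BASE = "https://splunk.example.com:8089/servicesNS/nobody/SA-ITOA/itoa_interface"

def build_service(title: str, kpis: list) -> dict:
    """Assemble a minimal ITSI service definition (illustrative fields only)."""
    return {"title": title, "enabled": 1, "kpis": kpis}

def create_service(service: dict, session_key: str) -> None:
    """POST one service definition to ITSI (placeholder auth/session handling)."""
    req = urllib.request.Request(
        f"{ITSI_BASE}/service",
        data=json.dumps(service).encode("utf-8"),
        headers={
            "Authorization": f"Splunk {session_key}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)

svc = build_service("Meraki Network", [{"title": "Device Availability"}])
```

The actual workflow creates many artifact types (entity types, correlation searches, aggregation policies) the same way, one endpoint per artifact.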
01:54
By the end of the third step you will be able to see a completely populated ITSI service map for the Meraki domain.
02:04
Here we are looking at step number one, where we have deployed the Meraki solution pack. Once it is activated, you can go to the configuration page, add the Meraki controller credentials, and set up the discovery targets.
02:19
Then you can run the discovery, which completes data collection for all the Meraki devices as well as the performance metrics collection.
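To give a feel for what this discovery step gathers, here is a minimal Python sketch against the public Meraki Dashboard API: it fetches an organization's device inventory and summarizes it by model. The API key is a placeholder, and the `summarize_by_model` helper is a hypothetical illustration, not part of the solution pack.

```python
import json
import urllib.request
from collections import Counter

MERAKI_API = "https://api.meraki.com/api/v1"  # Meraki Dashboard API base URL
API_KEY = "YOUR_MERAKI_API_KEY"               # placeholder credential

def get_org_devices(org_id: str) -> list:
    """Fetch the device inventory for one Meraki organization."""
    req = urllib.request.Request(
        f"{MERAKI_API}/organizations/{org_id}/devices",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize_by_model(devices: list) -> dict:
    """Count discovered devices per hardware model."""
    return dict(Counter(d.get("model", "unknown") for d in devices))

# Example with an inline inventory (no network call needed):
summary = summarize_by_model([
    {"model": "MR46"}, {"model": "MS120"}, {"model": "MR46"},
])
```

The solution pack automates this kind of inventory and metrics collection across all discovery targets, so no API scripting is needed in practice.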
02:30
Now we are looking at step number two, where the continuous ingestion pipelines take data from our pstreams and continuously copy it into Splunk indexes.
02:45
Here we are looking at the final step, where we orchestrate the Splunk ITSI artifact creation process.
02:56
Here we can see that we are creating the Meraki services, the KPIs, and the service dependencies.
03:04
We create the entity types, and we also create the entity import jobs, the correlation search, and the notable event aggregation policy.
03:13
And you can simply run it anytime like this.
03:17
Or you could go and change the configurations.
03:21
If you want, you can do some customizations by going into these specific tasks.
03:28
And here we have all the tasks listed.
03:31
If you want any customization, you can simply double-click on a task and change the base search or any other parameters.
03:40
And if you want, you can simply drag and drop the nodes from here.
03:44
Here is the final outcome.
03:46
This is how it looks in Splunk ITSI.
03:49
After all three steps are complete, you will be able to see the Meraki service tree along with all the dependent services.
04:01
And here we are able to see the KPIs along with the episodes.
04:05
This concludes the short demonstration.
04:07
Thank you so much for tuning in.
04:09
See you in another video.