So you've built your lab, created a VM, and installed the Splunk package, and you're ready to start Finding Evil but don't know how? Never fear.

When data is indexed in Splunk, there are some basic default fields that are extracted: index, timestamp, sourcetype, and host. Using these fields in your search queries will greatly speed up your searches, as Splunk uses this metadata to determine which datasets it needs to look through. It's best to use as many of these fields as you can, but if you can't use all four, the two most valuable are the index and time fields.

So let's say you have your Palo Alto firewall syslogging traffic violations to your Splunk box, and you have Splunk set up to index that data into an index called "palo_alto". Maybe you'll call it "firewall"; it doesn't matter all that much. I do like to keep similar logs together, but I only have one Palo Alto right now, so it goes into its own index. To specify which index to search, you add index=palo_alto to the search bar.

Splunk's time picker defaults to the last 24 hours. This can be changed either by picking a new time range on the picker or by specifying an earliest and/or latest parameter in the search bar. For example, if we wanted to search for events in the palo_alto index going back one week, we could either select "Last 7 days" in the time picker or add earliest=-1w (-7d works as well). Making sure you are using the right time range is absolutely critical for getting relevant data from your searches, and for making them as efficient as possible. You may have the data, but it's useless if it takes Splunk forever to return results!

From what we can see here, my firewall logged over 93,000 events over the past week, and it looks like there was a big jump in the middle of the week. This is interesting; let's dig in and see what's going on. From here we can highlight the time range we are interested in by selecting the portion of the timeline that contains our spike in traffic. The event count drops from 93,488 to 71,257. This is a pretty significant jump in traffic!

Is this a single IP address scanning me, or several? If you have the CIM (Common Information Model) module installed in Splunk (highly recommended), this will make life a lot easier, as it performs extractions on your data and applies standardized field names that you can search on regardless of what platform generated the data. For example, my Palo Alto doesn't even label the fields in its logs; they arrive via syslog as one long comma-separated string. Another platform may log with field names like dst_address, dst_port, s_addr, and s_prt. The CIM module applies standard field names for these values, such as dest_ip, dest_port, src_ip, and src_port, at search time, so you only have to worry about src_ip and src_port instead of remembering 20 different logging formats.

So, using standardized field names, let's see if this is the work of one address or multiple scanners. One way to do that is to count the events for each IP address seen in this time range; if it's a single address, we're going to see a pretty high count. Building on our existing search and the time range selected from the timeline, we take our 71,000 events and pipe them to the stats command. We want to count the number of times each source IP address appears in the log, so we will use the | stats count(src_ip) command.
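Pulling the index and time-range pieces together, the scoped base search sketched in this walkthrough (using the example index name from the text) would look something like:

```
index=palo_alto earliest=-7d
```

Equivalently, earliest=-1w, or selecting "Last 7 days" in the time picker, restricts the search to the same window.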
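As a sketch of the full per-address count described above: SPL's stats command groups per source address with a by clause (stats count by src_ip), and the trailing sort is my own addition so the busiest address lands at the top of the table:

```
index=palo_alto earliest=-7d
| stats count by src_ip
| sort - count
```

If a single src_ip dominates the resulting table, one scanner is likely responsible for the spike; a flat distribution across many addresses would suggest several.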