Tech Blog

ProLUG SEC Unit 10 🔒

Intro 👋

This is the final Unit and the close of the Linux Security Course. Though we did not have labs for this Unit, we spent a lot of time reflecting.

Closing Thoughts

I recently finished the first ProLUG Security Engineering Course, designed and delivered by Scott Champine, also known as Het Tanis, from ProLUG. It ran for about 10 weeks and clocked in at roughly 100 hours of focused effort—but honestly, I probably put in more than that once you count the spontaneous study sessions and the many side discussions that came up. A small group of us showed up consistently and really dug into the material, connecting ideas and bouncing thoughts off each other.

The course itself was free and not tied to any official institution, but it was taught by a seasoned industry professional who also teaches at the post-secondary level. Scott clearly cares about the subject and about helping others understand it. That came through in how he delivered the material, and it brought out a real sense of commitment in us too.

On top of just taking the course, I also helped shape it for future learners by starting a version-controlled course book. We had a small group that met weekly to go over edits and review pull requests. A few people even joined just to learn Git so they could contribute, which added to the sense of shared effort and made the experience even better.

One of the things that helped me stay on track was having a study group. There are a lot of sharp, motivated people in the ProLUG community, and quite a few of them kept up a steady pace through both the course and the book. The regular check-ins and shared discussions made a big difference.

The course itself covered a wide range of topics and gave me a stronger sense of how enterprise security is put together, maintained, and kept resilient. Security isn’t just about ticking boxes—it touches every part of a system. Especially with Linux, where multiple users and external inputs are constantly in play, it doesn’t take much for something to go sideways if you’re not paying attention.

We worked through the process of hardening Linux systems using STIGs—basically long, detailed lists of potential vulnerabilities and how to guard against them. It’s not fast work, but it really forces you to think about each configuration choice.

Patching was another major topic, and not in the usual “just update it” way. We talked about how every change introduces risk, and how important it is to approach patching as part of a controlled, planned process. That includes things like internal repositories, known-good system images, and minimizing surprise behavior from updates.

We also got hands-on with locking down systems: managing ingress and egress, shutting off unnecessary ports, setting up bastion hosts, and building out logging and alerting. We even worked on ways to trap misbehaving users or bots inside chroot jails. One of the others in the group automated that process with a Bash script for their final project.

We had deep conversations about monitoring too—things like how to design alerts that people can actually respond to, without burning out from constant noise. We looked at log filtering, storage, and what makes a log useful rather than just more clutter.

We also talked about automation and how it can sometimes get away from you. It’s easy for parts of a system to drift out of spec if you’re not careful, especially with orchestration tools. So we looked at how to use infrastructure-as-code and version control to make changes traceable and systems more predictable.

Toward the end of the course, we focused on trust, keys, and certificates. We got practical—generating and managing key pairs, breaking them, fixing them, and eventually building up to TLS certificates. These exercises helped drive home how trust is managed inside systems, especially in setups that lean toward zero trust.

Before this course, I already had a decent background in cybersecurity—some labs, a few certifications—but this gave me something more solid. I now feel like I understand what it means to build security into a system, rather than just bolt it on. I’m more confident setting up and maintaining a hardened Linux environment, and more thoughtful about how to track and manage change over time.

That said, I don’t think I’ve “arrived.” If anything, this course just made me more aware of how much I still have to learn. I’ve moved into that space where I know what I don’t know, and that’s a valuable place to be. It’ll take years to keep digging through it all, but now I’ve got a better starting point—and the confidence to figure things out when new challenges come up.

All in all, this course gave me a deeper appreciation for operational security, and it left me with some solid tools I’ll continue to use. Like with the Admin course before it, I really valued the people I got to work with. I expect we’ll keep exploring these topics together for a long time. And, like always, you make a few good friends along the way.

Discussion Post 1

Question

  • How many new topics or concepts do you have to go read about now?

Answers

  • TLS Transport Layer Security: Prior to the course, I was aware of the terminology and had a 30,000-foot conceptual view. During this course, I was able to zoom in and take a look at the transport and the layers. However, given the sheer scale and complexity of the topic, I will have to read through the 1.1, 1.2, and 1.3 specifications. One of my favorite IT authors, Michael W. Lucas, has a book for sale on the topic. https://www.tiltedwindmillpress.com/product/tls/

  • ZT Zero Trust: I get it at a high level as well. I learned that Zero Trust is a popular buzzword, or a form of jargon, for most. Actually drilling down and understanding the many forms and configurations of a ZT network is an immense undertaking. On the side I did some additional reading about it; for example, I read through parts of https://www.cisa.gov/sites/default/files/2023-04/CISA_Zero_Trust_Maturity_Model_Version_2_508c.pdf and plan to dig deeper into the subject.

  • Tokenization & Data Masking are two interesting topics. If anyone can recommend materials, I am interested. So far I have just found Wikipedia for the explanations.

  • SSO Single Sign On: I found this book: https://fw2s.com/wp-content/uploads/2017/09/definitive-guide-to-single-sign-on.pdf

  • DMARC Domain-based Message Authentication, Reporting & Conformance: I do have the book Run Your Own Mail Server by Michael W. Lucas. I assume he covers the topic to some degree.

  • SPF Sender Policy Framework: This should be covered by the aforementioned book as well.

  • CVSS Common Vulnerability Scoring System: I found this paper on the matter; it is a technical specification. https://www.first.org/cvss/v3-1/cvss-v31-specification_r1.pdf

  • TSDBs Time Series Databases: I have heard the concept both in this course and in game development. I think the concept is easy to grasp, but I would like to investigate further.

Question

What was completely new to you?

Answer

  • STIGs, for sure. Just prior to starting the course, I was given a glimpse of the STIG’ing process. In the course we were tasked with getting the STIG Viewer working, downloading specific STIGs, and implementing hardening while answering prompts about what specifically we were doing. It really helped to have finished the admin course prior to this, as it made the objectives clearer to me.

  • Bastion Hosts: Prior to using the one implemented on Scott’s own server, I had not seen this. Drilling into the concept and creating a bastion in the lab was a nice intro.

Question

  • What is something you heard before, but need to spend more time with?

Answer

  • I had heard the acronyms for many of the concepts prior to this course. In my answer to Question #1, I detailed what I will need to dig into after the course is complete.

Discussion Post 2

Scenario

  • Think about how the course objectives apply to the things you’ve worked on.

Question

  • How would you answer if I asked you for a quick rundown of how you would secure a Linux system?

Answer

  • First, I’d check open ports using ss -ntulp to see what services are listening and close anything unnecessary.
  • Next, I’d check how many user accounts exist by running cat /etc/passwd | wc -l, and optionally review users with high UIDs to see who has real login access.
  • I’d confirm that root login over SSH is disabled by checking /etc/ssh/sshd_config and setting PermitRootLogin no.
  • Then I’d check for any accounts with empty passwords using awk -F: '($2 == "") { print $1 }' /etc/shadow.
  • I’d list which users have sudo access by checking the sudo group or reviewing /etc/sudoers.
  • I would review running services with systemctl list-units --type=service and disable anything that isn’t needed.
  • Then I’d make sure a firewall is enabled and configured, using firewalld, ufw, or iptables, depending on the system.
  • I’d update all packages using the system’s package manager like dnf, apt, or yum to ensure known vulnerabilities are patched.
  • I’d also check file permissions on sensitive files like /etc/shadow and /home/* directories.
  • If SSH is exposed, I’d install and configure fail2ban to protect against brute-force login attempts.
  • I’d regularly check system logs like /var/log/auth.log or use journalctl to spot anything suspicious.
  • Lastly, I’d run a tool like Ansible Lock-Down to audit and find common misconfigurations.
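The steps above can be collected into a single read-only audit sketch. This is an illustrative script, not part of the course material; paths and group names vary by distribution (for example, the sudo group is "sudo" on Debian-family systems and "wheel" on RHEL-family systems).

```shell
#!/bin/sh
# Read-only audit sketch of the checklist above. Nothing here changes
# the system; the /etc/shadow check needs root and is skipped otherwise.

# Listening sockets: anything unexpected should be investigated.
ss -ntulp 2>/dev/null

# Account count, plus human users (UID >= 1000) with a real login shell.
echo "accounts: $(wc -l < /etc/passwd)"
awk -F: '$3 >= 1000 && $7 !~ /(nologin|false)$/ { print "login user:", $1 }' /etc/passwd

# Is root login over SSH explicitly disabled?
grep -i '^PermitRootLogin' /etc/ssh/sshd_config 2>/dev/null

# Accounts with empty password fields (file is readable only as root).
[ -r /etc/shadow ] && awk -F: '($2 == "") { print "empty password:", $1 }' /etc/shadow

# Who has sudo access?
getent group sudo wheel
```

Running it periodically and diffing the output against a saved copy is a cheap way to notice when one of these answers changes.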

Question

  • How would you answer if I asked you why you are a good fit as a security engineer in my company?

Answer

Though I am not a seasoned Security Engineer, I possess a solid understanding of Linux, system hardening, and monitoring techniques, along with a strong foundation in high-level concepts related to ensuring security, reliability, and confidentiality in systems and networks. I am a diligent learner and a prolific documenter, always striving to deepen my knowledge and contribute meaningfully to operational resilience and security best practices.

Frame

  • Think about what security concepts you think bear the most weight as you put these course objectives onto your resume.

Question

  • Which would you include?

Answer

I would perhaps list generalities:

  • Linux System Security Auditing
  • Linux System Hardening
  • Linux System Monitoring
  • Linux System Access Control
  • Encryption & Certificate Management
  • Infrastructure Security, Governance & Compliance

Question

Which don’t you feel comfortable including?

Answer

  • Network Security
  • Transport Layer Security

Discord: https://discord.com/invite/m6VPPD9usw
Youtube: https://www.youtube.com/@het_tanis8213
Twitch: https://www.twitch.tv/het_tanis
ProLUG PSC Repo: https://github.com/ProfessionalLinuxUsersGroup/psc
ProLUG PSC Book: https://professionallinuxusersgroup.github.io/psc/
ProLUG Book of Labs: https://leanpub.com/theprolugbigbookoflabs
KillerCoda: https://killercoda.com/het-tanis

ProLUG SEC Unit 9 🔒

Intro 👋

In this Unit we look at how certificates and keys go beyond asymmetric encryption with public/private key pairs. We look at how multiple checks and multiple layers of trust must be used in this mad, mad world.1


Worksheet

Question

How do these topics align with what you already know about system security?

Answer

Well, I felt I had a clear picture of symmetric and asymmetric encryption modalities. Furthermore, I had a strong prior understanding of X.509 and SSH, where asymmetric encryption is used, as well as the procedure of generating private and subsequent public keys. However, the verbosity and complexity of the required reading has me scratching my head and looking at more sophisticated modalities of key generation and exchange, e.g., TLS 1.2 and 1.3.

Question

Were any of the terms or concepts new to you?

Answer

key-transport and/or key-agreement protocols - a method of establishing a shared secret key between two or more parties where one party creates the key and securely delivers it to the others.

Challenge Values - dynamic, randomly generated numbers or strings used to initiate authentication.

nonce - a unique, random or pseudo-random number used to ensure the security and integrity of data transmitted over a network.

Assignment

Watch a short video about CAs and the Chain of Trust (Distributed Trust Model). Review the TLS Overview section, pages 4-7 of https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-52r2.pdf, and answer the following questions.

Question

What are the three subprotocols of TLS?

Answer

  • Handshake: used to negotiate the session parameters.

  • Change cipher spec: used in TLS 1.0, 1.1, and 1.2 to change the cryptographic parameters of a session.

  • Alert: used to notify the other party of an error condition.

Question

How does TLS apply to: Confidentiality, Integrity, Authentication, and Anti-replay?

Answer

Confidentiality

Confidentiality is provided for a communication session by the negotiated encryption algorithm for the cipher suite and the encryption keys derived from the master secret and random values.

Integrity

TLS uses a cipher suite of algorithms and functions, including key establishment, digital signature, confidentiality, and integrity algorithms. In TLS 1.3, the master secret is derived by iteratively invoking an extract-then-expand function with previously derived secrets; it is used by the negotiated security services to protect the data exchanged between the client and the server. In TLS 1.3, only AEAD symmetric algorithms are used for confidentiality and integrity.

Authentication

Server authentication is performed by the client using the server’s public-key certificate, which the server presents during the handshake.

Anti-Replay

In TLS 1.3, the integrity-protected envelope of the message contains a monotonically increasing sequence number. Once the message integrity is verified, the sequence number of the current message is compared with the sequence number of the previous message.

Definitions

  • TLS (Transport Layer Security) A protocol that encrypts data in transit to ensure privacy and integrity.
  • Symmetric Keys A cryptographic method where the same key is used for both encryption and decryption.
  • Asymmetric Keys A method using a public/private key pair where one key encrypts and the other decrypts.
  • Non-Repudiation A guarantee that a sender cannot deny the authenticity of their message or signature.
  • Anti-Replay A mechanism that prevents attackers from reusing valid data packets to mimic legitimate transactions.
  • Plaintext Data in a readable and unencrypted format.
  • Ciphertext Data that has been encrypted and is unreadable without the correct decryption key.
  • Fingerprints Short unique representations (hashes) of public keys used to verify their authenticity.
  • Passphrase (in key generation) A user-supplied string that encrypts private keys to protect them from unauthorized access.
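The Fingerprints definition above is easy to make concrete with ssh-keygen; the key path below is a throwaway example, not a file from the lab.

```shell
# Generate a throwaway Ed25519 key pair with no passphrase, then print
# the SHA-256 fingerprint of the public key -- the short value a client
# compares against a known-good copy to verify the key's authenticity.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -N '' -f /tmp/demo_key -q
ssh-keygen -lf /tmp/demo_key.pub
```

The same idea appears in TLS as certificate fingerprints: a fixed-size hash standing in for a much larger public key.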

Lab 🧪

Assignment

Answer

  • We generated a 90-day TLS web client certificate. I saved a snippet of the options below.
Activation/Expiration time.
The certificate will expire in (days): 90
Extensions.
Does the certificate belong to an authority? (y/N): y
Path length constraint (decimal, -1 for no constraint): 
Is this a TLS web client certificate? (y/N): y
Will the certificate be used for IPsec IKE operations? (y/N): y
Is this a TLS web server certificate? (y/N): y
Enter a dnsName of the subject of the certificate: 
Enter a URI of the subject of the certificate: 
Enter the IP address of the subject of the certificate: 
Will the certificate be used for signing (DHE ciphersuites)? (Y/n): y
Will the certificate be used for encryption (RSA ciphersuites)? (Y/n): y
Will the certificate be used for data encryption? (y/N): y
Will the certificate be used to sign OCSP requests? (y/N): y
Will the certificate be used to sign code? (y/N): y
Will the certificate be used for time stamping? (y/N): y
Will the certificate be used for email protection? (y/N): y
Will the certificate be used to sign other certificates? (Y/n): y
Will the certificate be used to sign CRLs? (y/N): y
Will the certificate be used for signing (DHE ciphersuites)? (Y/n): y
Enter the URI of the CRL distribution point: 
X.509 Certificate Information:
Version: 3
Serial Number (hex): 32a1646105dcb6229eba87ad4c08a99a2bb92a99
Validity:
Not Before: Mon Jun 02 03:46:43 UTC 2025
Not After: Sun Aug 31 03:46:48 UTC 2025
Subject: O=prolug
Subject Public Key Algorithm: RSA
Algorithm Security Level: High (3072 bits)
Modulus (bits 3072):
00:e8:c7:f5:6e:7c:23:e3:7e:e7:d0:c5:c4:cf:c0:98
23:5f:1e:f6:5f:5d:87:c6:c8:18:13:cb:5e:1b:1a:88
03:98:4d:55:5d:4d:14:cc:78:8d:83:e3:c5:65:16:8c
41:a8:9f:32:ab:f4:47:3f:84:b2:b8:0d:7c:b3:a6:e7
21:59:13:d2:45:40:60:d6:2c:eb:5a:f3:00:0c:e7:36
06:0f:ca:51:04:92:06:91:80:f0:04:52:d2:66:e3:33
11:7b:8e:f7:e3:22:19:83:c8:dc:c8:f9:18:c7:51:4f
38:6a:d8:07:bf:12:02:f4:5e:0d:52:2e:cc:0b:4e:d9
e0:b2:07:9a:cd:39:99:a7:28:42:e4:67:b0:ff:04:2d
f9:13:8c:0f:19:b5:13:ee:59:a3:e7:e8:f7:a1:e9:92
2e:ce:49:23:3c:0a:b4:29:ca:5d:74:6e:9e:09:ea:fd
72:6a:89:6e:5f:29:d6:0a:44:98:1e:2c:39:66:44:11
4f:47:c5:64:a3:0c:84:2b:fd:32:2e:a9:ce:e7:be:b4
7c:3b:e6:b9:23:98:82:ac:86:20:07:4e:59:84:4d:0c
02:38:76:87:ef:f8:17:05:5b:93:79:25:73:fc:18:f5
4e:1d:ff:84:45:10:7d:46:51:69:ae:73:6d:e9:1e:fd
ff:55:5a:78:4d:f6:cd:44:af:22:0f:b0:18:fb:82:b9
f6:aa:3d:2a:08:00:62:d1:9b:28:50:94:39:98:f5:de
f9:cf:3f:d8:ae:72:68:69:f1:46:97:8f:d5:a6:9a:3e
4c:57:37:5f:69:0e:2f:4e:b6:6e:65:a5:2c:f0:5b:c6
c2:ff:43:b7:4e:b7:56:3f:2b:d8:5d:b9:73:15:ca:81
f1:c3:78:2f:8d:4f:fd:e8:2d:6f:2f:2d:f6:b9:e1:a0
11:f2:56:18:02:5b:8e:07:da:19:43:c1:70:bc:7b:8b
82:2b:02:e2:71:6e:30:9b:18:8d:ed:1f:29:59:86:9d
81
Exponent (bits 24):
01:00:01
Extensions:
Basic Constraints (critical):
Certificate Authority (CA): TRUE
Key Purpose (not critical):
TLS WWW Client.
TLS WWW Server.
Ipsec IKE.
OCSP signing.
Code signing.
Time stamping.
Email protection.
Key Usage (critical):
Digital signature.
Key encipherment.
Data encipherment.
Certificate signing.
CRL signing.
Subject Key Identifier (not critical):
213b20bf44b3446fb14f6cf72b8c2c03a09e292e
Other Information:
Public Key ID:
sha1:213b20bf44b3446fb14f6cf72b8c2c03a09e292e
sha256:7f76aada143491a8ba0721509a3e49f9e72321ed880f7ee64b8e01172989b3d2
Public Key PIN:
pin-sha256:f3aq2hQ0kai6ByFQmj5J+ecjIe2ID37mS44BFymJs9I=
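To sanity-check a certificate like the one above without reading the full dump, openssl can print just the key fields. The self-signed certificate generated below is a stand-in for the lab's file, and the paths are illustrative.

```shell
# Create a throwaway 90-day self-signed certificate (a stand-in for the
# one generated in the lab)...
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/O=prolug" -keyout /tmp/lab.key -out /tmp/lab.crt 2>/dev/null

# ...then print the fields worth checking: subject, validity window,
# and the SHA-256 fingerprint used to pin or verify the certificate.
openssl x509 -in /tmp/lab.crt -noout -subject -dates -fingerprint -sha256
```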

Reading

  • Review Solving the Bottom Turtle2

Question

  • Does the diagram on page 44 make sense to you for what you did with a certificate authority in this lab?

Answer

  • Yes, it does. We had only set up a portion of this chain of trust, yet it got the idea across of whom we are referring to and how we build a certificate from that referral.

Assignment

Question

  • What is the significance of the permission settings that you saw on the generated public and private key pairs?

Answer

  • Only the owner has read/write permission on the private key, whereas the public key, meant to be shared, is readable by group and others.
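That split can be reproduced with ssh-keygen (illustrative paths; the lab may have used a different tool):

```shell
# Generate a key pair, then compare modes: with the default umask,
# ssh-keygen creates the private key as 600 (owner read/write only)
# and the public key as 644 (readable by group and others).
rm -f /tmp/perm_key /tmp/perm_key.pub
ssh-keygen -t ed25519 -N '' -f /tmp/perm_key -q
stat -c '%a %n' /tmp/perm_key /tmp/perm_key.pub
```

SSH itself enforces the private-key side: a private key readable by others is rejected with an "UNPROTECTED PRIVATE KEY FILE" warning.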




  1. Professional Linux User Group Security Engineering Unit 8 Web Book ProLUG, 2025. ↩︎

  2. Solving the Bottom Turtle Web Book Spiffe.io,2020. ↩︎

ProLUG SEC Unit 7 🔒

Intro 👋

Monitoring systems and alerting when issues arise are critical responsibilities for system operators. Effective observability ensures that system health, performance, and security can be continuously assessed.1


Worksheet

Discussion Post 1

Intro to the scenario

Read about telemetry, logs, and traces23.

Question

  • How does the usage guidance of that blog align with your understanding of these three items?

Answer

Though the concepts involved in telemetry are really quite simple, they took me some time to internalize and fully understand. I can’t say the guidance paralleled my own understanding, as my understanding was very limited. Prior to the lectures, if I heard the word telemetry, I would think of non-GPS tracking techniques or some sort of secret tracking by Palantir.

My simplified outline of these three things:

  • A metric represents a point-in-time measurement of a particular source.
  • Logs are discrete, event-triggered occurrences.
  • Traces follow a program’s flow and data progression.

Question

  • What other useful blogs or AI write-ups were you able to find?

Answer

Question

  • What is the usefulness of this in securing your system?

Answer

  • Securing a System is useful in many ways.
  • Prevents unwanted access.
  • Potentially mitigates Data Exfiltration.
  • Potentially mitigates unwanted Data Infiltration.
  • Prevents misuse by users.
  • Can simply mitigate the overload of available resources.
  • Not only does the attack surface shrink when a system is properly secured, but the monitoring tasks reduce.
  • Secure systems are more predictable systems.

Discussion Post 2

Intro to the scenario

When we think of our systems, sometimes an airgapped system is simple to think about because everything is closed in. The idea of alerting or reporting is the opposite. We are trying to get the correct, timely, and important information out of the system when and where it is needed.

Read the summary at the top4.

Question

  • What is the litmus test for a page? (Sending something out of the system?)

Answer

  • The page must pertain to an imminent, actionable situation that must be addressed quickly.

Question

  • What is over-monitoring v. under-monitoring. Do you agree with the assessment of the paper? Why or why not, in your experience?

Answer

  • Over-monitoring can be compared to hyper-vigilance. Over time it works against you as fatigue or indifference sets in. Furthermore, over-monitoring includes the transporting, receiving, and dissemination of too much information, causing cognitive overload and leading to poor decision making. The additional information being broadcast also leaves a system more susceptible from a security standpoint.

  • Under-monitoring would be a lack of the contextual reporting, responsiveness, and diligence needed to keep a system from going down.

  • From reading this article, it seems to me that one must turn monitoring into a spectrum of detail. Hypercritical indicators like uptime, load, and capacity should be reported daily against a pre-determined baseline. Estimations can be made from this, allowing for prediction, while major changes, those outside the predicted norm, could trigger alerts. Paging should be reserved for the utmost critical issues.

Question

  • What is cause-based v. symptom-based and where do they belong? Do you agree?

Answer

  • Cause-based is the analysis/investigation of the root cause of a certain outcome. In the context of systems operation and security, it is finding the vulnerability, the infiltration/exfiltration point, or the cause of failure, like hitting memory/CPU/disk space limitations.

  • Symptom-based analysis would be to observe the effects of an unknown origin, i.e. systems going down, data loss, etc. Bringing the system back up, or restoring data from backups, does nothing to address the root cause; it only remediates the effects.


Definitions

  • Telemetry Automated collection of system metrics and status data.
  • Tracing Tracks the path and performance of requests across services.
  • Span A single unit of work in a trace, with start and end timestamps.
  • Label Key-value pair used to add metadata to traces or metrics.
  • Time Series Database (TSDB) A database optimized for storing data indexed by time.
  • Queue A data structure or service for holding and processing messages in order.
  • UCL/LCL Statistical limits used to detect anomalies in metrics over time.
  • Aggregation Combining multiple data points into a summarized form.
  • SLO Service Level Objective, a target performance metric.
  • SLA Service Level Agreement, a contractually agreed service standard.
  • SLI Service Level Indicator, a specific measurement of system performance.
  • Push Data sent actively from source to receiver.
  • Pull Data requested by the receiver from the source.
  • Alerting rules Conditions set to trigger alerts based on system metrics.
  • Alertmanager Tool for handling, deduplicating, and routing alerts.
  • Alert template Format used to display or notify alert information.
  • Routing Directing alerts to specific teams or destinations.
  • Throttling Limiting the number or rate of alerts to reduce noise.
  • Monitoring for defensive operations Watching systems for signs of attack or failure.
  • SIEM Centralized platform for analyzing security and event logs.
  • IDS Tool that detects suspicious or unauthorized activity.
  • IPS Tool that blocks or prevents detected threats in real time.

Lab 🧪

Fail2Ban Setup and Testing

Install Fail2Ban

  • Install Fail2Ban:
    apt install -y fail2ban

Verify Installation

  • Check the version of Fail2Ban:
    fail2ban-client --version

Configure SSHD Jail

  • Edit the jail configuration file:

    vi /etc/fail2ban/jail.conf
  • Uncomment [sshd] and add the following under that section:

    [sshd]
    enabled = true
    maxretry = 5
    findtime = 10
    bantime = 4h
  • Review the rest of the file and ensure there is no duplicate [sshd] section. Comment it out or remove it if found.

Explore Other Jails

  • Review configuration sections for Apache and NGINX to see other available jails.

Restart and Verify Fail2Ban

  • Restart the service:

    systemctl restart fail2ban
  • Check status:

    systemctl status fail2ban --no-pager

Test the SSH Ban

  • SSH into node01:

    ssh node01
  • Run a loop to simulate failed login attempts:

    for i in {1..6}; do ssh invaliduser@controlplane; done
  • Press Enter on each password prompt until Fail2Ban triggers.

  • Use Ctrl + C to exit the loop when connection attempts are blocked.

Check Ban Status

  • Return to controlplane.

  • View the logs:

    tail -20 /var/log/fail2ban.log
  • Check banned IPs:

    fail2ban-client get sshd banned

Question

  • Do you see the expected IP address in the ban list?
  • Why do you think that is?

Answer

  • Yes I was able to see it.

Unban the IP

  • Replace <IP> with the actual banned IP:
    fail2ban-client set sshd unbanip <IP>

Confirm Unban

  • SSH into node01:

    ssh node01
  • Try to reconnect to controlplane using the correct user:

    ssh root@controlplane

Question

  • Did the connection succeed?

Answer

  • Yes, I was able to reconnect




  1. Professional Linux User Group Security Engineering Unit 7 Web Book Source, 2025. ↩︎

  2. Observability Chapter Web Book Source, 2025. ↩︎

  3. Telemetry Web Book Grafana, 2025. ↩︎

  4. My Philosophy on Alerting Google Doc Rob Ewaschuk, 2014. ↩︎

ProLUG SEC Unit 8 🔒

Intro 👋

Configuration drift is the silent enemy of consistent, secure infrastructure. When systems slowly deviate from their intended state, whether that be through manual changes, failed updates, or misconfigured automation, security risks increase and reliability suffers.1


Worksheet

Discussion Post 1

Read about configuration management2

Questions

What overlap of terms and concepts do you see from this week’s meeting?

Answer

  • Lifecycle management and Change Control (Change Management).
  • Change Management is a system for ensuring process and product integrity.
  • Despite these controls, variation from the norm (configuration drift) is inevitable.
  • So we must invoke/involve controls in order to catch variation/drift.
  • In the case of systems, it is both misconfigured systems and misconfigured users that induce variation/drift.

Question

What are some of the standards and guidelines organizations involved in configuration management?

Answer

  • Originally developed by the U.S. Department of Defense to ensure quality, reliability, and integrity in the manufacturing supply chain, configuration management principles were later adopted and expanded upon by standards bodies such as ANSI, ISO, and IEEE. These concepts have since evolved through industry-specific frameworks, including:

  • ITIL

  • ISO/IEC

  • NIST

  • IEEE

  • CERN

Question

Do you recognize them from other IT activities?

Answer

  • For sure.

  • Baselining Gathering telemetry from a system at its base config

  • Standards Developing a standard for configuration or procedure to ensure consistent and predictable output

  • Controls Controlling versions, changes, configurations

  • Automation Automatic and Repeatable tasks

  • Variation Departure from the standard

  • Remediation Reconciliation, Correction, Rebasing

Discussion Post 2

Review the SRE guide to treating configurations as code. Focus down on the “Practical Advice” section 3

Question

  • What are the best practices that you can use in your configuration management adherence?

Answer

  • Don’t Check in Secrets

  • Make it Hermetic

    • Apply the Rigor of Code
    • Golden Image
  • Make it Reproducible

    • Try to Implement a Software Bill of Materials (SBOM)
    • Patching (If warranted) records.
  • Make it Verifiable

    • Binary Provenance
    • Use Signed Code
    • Verify Artifacts, Not Just People
    • Verifiable Build Architectures

Question

  • What are the security threats and how can you mitigate them?

Answer

  • Supply Chain Attacks
  • Exposure of secrets
  • Non-hermeticity and Drift
  • Over-privileging through automation
  • Inadequate Auditing and Change Control
  • Insecure Testing Environments
  • Artifact Poisoning

Question

  • Why might it be good to know this as you design a CMDB or CI/CD pipeline?

Answer

  • The pipeline is a major target. If something malicious were to be injected, the problem could propagate to all target platforms/devices.
  • The CMDB is a source of truth. A misconfiguration, bad record, or malicious activity could invalidate hermeticity.
  • Secrets and credentials flow through the pipeline, which is a whole can of worms.

Definitions

  • System Lifecycle The full span of a system’s life: design, build, operate, maintain, and retire.
  • Configuration Drift The divergence of a system’s current state from its intended or documented configuration.
  • Change management activities Processes that control changes to systems to reduce errors and downtime.
  • CMDB (Configuration Management Database) A database tracking system components and their relationships.
  • CI (Configuration Item) Any component in the CMDB (e.g., server, software, network) being tracked and managed.
  • Baseline A known good configuration state used for comparison and control.
  • Build book A documented set of steps to initially install and configure a system.
  • Run book A manual or automated guide for maintaining or operating a system post-deployment.
  • Hashing The process of generating a fixed-size value from data to verify integrity.
  • md5sum Tool that calculates a 128-bit MD5 hash for checking file integrity.
  • sha<x>sum Tools (e.g., sha256sum) that generate SHA-family hashes for stronger integrity checks.
  • IaC (Infrastructure as Code) Managing infrastructure using versioned code instead of manual processes.
  • Orchestration Coordinating automated tasks across multiple systems or services.
  • Automation Replacing manual tasks with scripts or tools to increase speed and consistency.
  • AIDE (Advanced Intrusion Detection Environment) A file integrity checker that detects unauthorized changes.
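Several of these definitions (hashing, md5sum, sha&lt;x&gt;sum) can be exercised directly from the shell. A minimal integrity-check sketch using coreutils (the file name app.conf is illustrative):

```shell
# Record a known-good hash of a file, then verify it later.
echo "config_v1" > app.conf
sha256sum app.conf > app.conf.sha256   # baseline: "<hash>  app.conf"

# Later: confirm the file has not drifted from the baseline.
sha256sum -c app.conf.sha256           # prints "app.conf: OK" if unchanged
```

This is the same idea AIDE applies at scale: store hashes in a database, then re-hash and compare.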

Lab 🧪

STIG Viewer – Change Management

Question

  • How many STIGs relate to “change management” in RHEL 9?

Answer

  • 9 STIGs contain the phrase.

Question

  • What does a “robust change management process” imply?

Answer

  • Change control, peer review, versioning, testing, and approval are mandatory before config updates.

Question

  • Can one STIG enforce this?

Answer

  • No, it’s an org-wide practice beyond simple config toggles.

Question

  • What type of control is applied?

Answer

  • Technical preventative—mostly file ownership/permissions.

Question

  • Are they all the same?

Answer

  • Yes, the control type is consistent across them.

Monitoring Configuration Drift with AIDE

Question

  • What is /etc/aide/aide.conf.d/?

Answer

  • Contains rule files defining paths to hash and monitor.
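As a sketch of what such a rule file can contain (the paths and attribute groups here are illustrative, not the shipped defaults):

```text
# Illustrative AIDE rule fragment
# p = permissions, u = user, g = group, sha256 = content hash
/etc      p+u+g+sha256
!/etc/mtab
/var/log  p+u+g
```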

Question

  • How many files are there?

Answer

  • 213 files.

Question

  • What does aide -v show?

Answer

  • Version 0.18.6

Question

  • What is AIDE?

Answer

  • File integrity checker using stored hashes in a database.

Question

  • What does /etc/cron.daily/dailyaidecheck do?

Answer

  • Runs dailyaidecheck via capsh if available, otherwise with bash.

Question

  • What does capsh do?

Answer

  • Launches processes with limited capabilities—safer than full root.

Question

  • What does aide -i do?

Answer

  • Initializes the DB. It took ~4m14s. User time was ~3m30s.

Question

  • Why track timing?

Answer

  • For planning and resource estimation during mass deployments.

Question

  • What’s in the output?

Answer

  • Hashes (MD5, SHA, etc.) and /var/lib/aide/aide.db.new.

Question

  • What should you study?

Answer

  • RMD160, TIGER, CRC32, HAVAL, WHIRLPOOL, GOST.

AIDE Test Run

Question

  • What’s the test procedure?

Answer

  • Create /root/prolug/test*, run aide check.

Question

  • Were files detected?

Answer

  • Yes, under “Added entries.”

Question

  • Runtime?

Answer

  • ~6m38s, user ~5m54s, sys ~8s.

Remediating Drift with Ansible

Question

  • What does the web env lab do?

Answer

  • Deploys 3 virtual hosts (dev, test, qa) on ports 808{0,1,2}.

Question

  • How do you test?

Answer

  • curl node01:808{0,1,2}
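That check can be looped. A small sketch (host name node01 per the lab; the function name is mine):

```shell
# Probe each virtual host and flag any that do not answer.
check_vhosts() {
  local host="$1" failed=0 port
  for port in 8080 8081 8082; do
    if curl -fsS "http://${host}:${port}/" > /dev/null 2>&1; then
      echo "port ${port}: OK"
    else
      echo "port ${port}: FAILED (possible drift)"
      failed=1
    fi
  done
  return "$failed"
}

check_vhosts node01 || echo "drift detected: re-run the playbook"
```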

Question

  • What happened to 8081?

Answer

  • It failed initially—intentional drift.

Question

  • Does re-running the playbook fix it?

Answer

  • Yes, restores state without manual steps.

Question

  • Will that always work?

Answer

  • Yes, unless networking/firewall issues prevent access.

Question

  • Can this cause issues?

Answer

  • Yes, if configs were changed manually after deployment.

Question

  • Root cause: tech or ops?

Answer

  • Operational—teams must coordinate changes.

Challenge: Custom Reporting

Question

  • How would you verify stamp compliance?

Answer

  • Use Ansible facts and add deployment date as a custom variable.
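One way to sketch that with a custom local fact (the fact name deployment_stamp and the group web are assumptions, not from the lab):

```yaml
# Hypothetical playbook: stamp hosts at deploy time, then report the stamp.
- name: Stamp deployment date as a custom local fact
  hosts: web
  tasks:
    - name: Write a facts.d file, read back later as ansible_local.deployment
      ansible.builtin.copy:
        dest: /etc/ansible/facts.d/deployment.fact
        content: '{"deployment_stamp": "{{ ansible_date_time.date }}"}'
        mode: "0644"

- name: Report the stamp from gathered facts
  hosts: web
  tasks:
    - ansible.builtin.debug:
        msg: "Deployed: {{ ansible_local.deployment.deployment_stamp | default('MISSING') }}"
```

Any host reporting MISSING has not been stamped by the playbook and can be flagged for review.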

Discord: https://discord.com/invite/m6VPPD9usw
Youtube: https://www.youtube.com/@het_tanis8213
Twitch: https://www.twitch.tv/het_tanis
ProLUG PSC Repo: https://github.com/ProfessionalLinuxUsersGroup/psc
ProLUG PSC Book: https://professionallinuxusersgroup.github.io/psc/
ProLUG Book of Labs: https://leanpub.com/theprolugbigbookoflabs
KillerCoda: https://killercoda.com/het-tanis



  1. Professional Linux User Group Security Engineering Unit 8 Web Book ProLUG, 2025. ↩︎

  2. Configuration Management Wiki Wikipedia, 2025. ↩︎

  3. Building Secure and Reliable Systems Web Book Google, 2025. ↩︎

ProLUG SEC Unit 6 🔒

Intro 👋

Monitoring and parsing logs is essential to operational intelligence. Computers typically produce immense amounts of data—far more than a human can interpret in real time. To extract meaning from this data, we must intelligently filter event logs into clear, comprehensible, and actionable items.

Achieving this is easier said than done. This unit offers general advice on the art of making complex information comprehensible. 1


Worksheet

Discussion Post 1

Review chapter 15 of the SRE book. 2

There are 14 references at the end of the chapter. Follow them for more information. One of them, by Julia Evans 3, should be reviewed for question “c”.

Question

  • What are some concepts that are new to you?

Answer

  • Core dumps, Memory dumps, or Stack traces.

    I have heard the terms before and understand the concepts to a basic degree. I decided to do a bit of further reading to understand each of the dumps and traces, so here is the gist.

    • A core dump is a snapshot of a process's state at the time it went down.
    • A memory dump is a snapshot of the system's Random Access Memory (RAM) at the time it went down.
    • A stack trace traces function calls through the stack from the end (the error) back to the beginning (the original call). The way I personally conceptualize this is through comparison to root cause analysis, something I am familiar with.
  • Host intrusion detection systems (HIDS) or Host Agents

A few ideas from the book 2: “Modern (sometimes referred to as “next-gen”) host agents use innovative techniques aimed at detecting increasingly sophisticated threats. Some agents blend system and user behavior modeling, machine learning, and threat intelligence to identify previously unknown attacks.”

“Host agents always impact performance, and are often a source of friction between end users and IT teams. Generally speaking, the more data an agent can gather, the greater its performance impact may be because of deeper platform integration and more on-host processing.”

Question

  • There are 5 conclusions drawn; do you agree with them? Would you add or remove anything from the list?

Answer

  • To begin with, here are the conclusions drawn:
  1. “Debugging is an essential activity whereby systematic techniques—not guesswork—achieve results.”
  2. “Security investigations are different from debugging. They involve different people, tactics, and risks.”
  3. “Centralized logging is useful for debugging purposes, critical for investigations, and often useful for business analysis.”
  4. “Iterate by looking at some recent investigations and asking yourself what information would have helped you debug an issue or investigate a concern.”
  5. “Design for safety. You need logs. Debuggers need access to systems and stored data. However, as the amount of data you store increases, both logs and debugging endpoints can become targets for adversaries.”

Firstly, I would like to preface this answer with a disclaimer: I lack the competency to criticize and/or dissect O’Reilly’s book. With that out of the way, I am going to target the first point.

My only criticism here is that the point is very broad in scope compared to the more granular, topic-specific conclusions of this book/chapter.

Question

  • In Julia Evans’s debugging blog, which shows that debugging is just another form of troubleshooting, what useful things do you learn about the relationship between these topics?

Answer

  • Both debugging and troubleshooting involve:

  • Proceduralization: If a clear procedure doesn’t exist, begin documenting and formalizing the process into a repeatable method.

  • Humility: Acknowledge that you might be the cause of the problem. This is especially important in development.

  • Methodical Experimentation: Form a hypothesis, then devise a controlled method to test it—use unit tests in development, or targeted scripts and commands when debugging.

  • One Step at a Time: Tackle problems incrementally—“eat the elephant one bite at a time.”

  • Strong Foundations: Write debuggable code and build robust systems. A good foundation makes issues easier to isolate.

  • More Is Better: Verbose error messages provide more clues—enable detailed output when possible.

Question

  • Are there any techniques you already do that this helps solidify for you?

Answer

Yes, I try to create excellent documentation with respect for my future self or others I may need to share it with. This involves numbered procedural steps with inputs and outputs, if that is the nature of the work. Otherwise, I write in a general manner that is legible to others.

Discussion Post 2

Read Monitoring Distributed Systems 4

Question

  • What interesting or new things do you learn in this reading? What may you want to know more about?

Answer

  • Interesting Concept:

One of the general themes I gathered from this article is low cognitive overhead. It’s a concept I’m very familiar with from accessibility-focused design. Too much information overwhelms our ability to observe, absorb, and decide effectively.

For example, public signage must be simple, legible, and self-descriptive through clear graphic composition—guiding the eye where to look first and in which direction to proceed. This closely parallels the need for simplicity in monitoring and alerting systems. When such systems become overly complex, they can lead to misinterpretation, miscommunication, and fatigue due to information overload.

Information must be derived and presented in a way that is easily consumable, where errors are unmistakable—without exhausting the viewer.

New concepts

  • White-box monitoring systems vs. black-box monitoring systems.
  • Conducting ad hoc retrospective analysis (i.e., debugging)
  • (4 Golden Signals) Latency, Traffic, Saturation, Errors
    • This one in particular relates strongly to the USE acronym I had recently picked up from Het: Utilization, Saturation, Errors

Question

  • What are the “4 golden signals”?

Answer

  1. Latency
  2. Traffic
  3. Saturation
  4. Errors

Question

  • After reading these, why is immutability so important to logging?

Answer

  • Tamper Resistance: Immutable logs cannot be altered or deleted without detection, which helps prevent covering up malicious activity or mistakes.
  • Auditability: Logs serve as historical records. If they can be changed, audits and investigations lose their value.
  • Debugging Integrity: Developers and operators rely on logs to trace errors. Mutable logs can introduce false positives or hide root causes.
  • Regulatory Compliance: Standards like HIPAA, PCI-DSS, and GDPR often require tamper-evident or immutable log storage.
  • Forensic Value: In incident response, immutable logs serve as trustworthy evidence for timelines and breach analysis.

Question

  • What do you think the other required items are for logging to be effective?

Answer

In order to be effective, logs must be:

  • Trustworthy: Logs should be immutable.
  • Time-stamped: Every entry needs a synced timestamp.
  • Clear levels: Use INFO, ERROR, DEBUG, etc., to show importance.
  • Structured: Format logs so machines and humans can read them.
  • Context-rich: Include request IDs, user info, IPs—anything that helps trace the story.
  • Centralized: Gather logs in one place for easy searching and alerting.
  • Searchable: You should be able to find issues fast with good queries.
  • Safe: Control who can see logs—some contain sensitive info.
  • Durable: Logs shouldn’t disappear in a crash—use backups and redundancy.
  • Noise-controlled: Avoid flooding—rotate logs and cap log rates.

Definitions

  • Application Logs from software applications showing events or errors.
  • Host Logs from the operating system, like kernel or authentication events.
  • Network Logs capturing traffic, connections, and protocol activity.
  • DB Logs from databases showing queries, errors, and access.
  • Immutable Logs that cannot be changed once written.
  • RFC 3164 BSD Syslog Older syslog format with simple priority and message structure.
  • RFC 5424 IETF Syslog Modern syslog format with structured data and better timestamps.
  • Systemd Journal Binary log format used by systemd with metadata support.
  • Log rotation Archiving or deleting old logs to manage disk space.
  • Rsyslog Advanced syslog daemon for filtering, formatting, and remote logging.
  • Log aggregation Collecting logs from multiple sources for analysis.
  • ELK Stack using Elasticsearch, Logstash, and Kibana to manage logs.
  • Splunk Tool for searching and analyzing machine-generated logs.
  • Graylog Open-source platform for central log collection and analysis.
  • Loki Log system that indexes by labels, optimized for Grafana.
  • SIEM Tools that collect and analyze security data for threat detection.

Lab 🧪

RSYSLOG

The “rocket-fast system for log processing”

Basic Steps:

  1. Ensure Rsyslog is installed and running on both the control-plane and target node.
  2. Configure sending of logs over a UDP port.
  3. Edit the Rsyslog config to split out the logs.

Question

Why do we split out the logs in this lab? Why don’t we just aggregate them to one place?

Answer

  • We are aggregating the logs.
  • So that we can tell where the logs are coming from.
  • Each node gets its own directory in /var/log.

Question

  • What do we split them out by?

Answer

  • We split them by hostname.

Question

  • How does that template configuration work?

Answer

  • It logs to a directory under /var/log named after the sending host.
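On the receiving side, a template along these lines produces that layout (a sketch; the UDP port and file name are assumptions, adjust to the lab's values):

```text
# rsyslog receiver -- sketch
module(load="imudp")              # accept syslog over UDP
input(type="imudp" port="514")

# Build the file path from the sending host's name:
$template RemoteLogs,"/var/log/%HOSTNAME%/messages.log"
*.* ?RemoteLogs
```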

Question

  • Are we securing this communication in any way, or do we still need to configure that?

Answer

  • No, we are not securing this communication; it still needs further configuration.

Lab

Question

  • Does the lab work correctly, and do you understand the data flow?

Answer

  • Yes

  1. Promtail (collects)
  2. Loki (stores)
  3. Grafana (visualizes)

loki-write.py

Question

  • Can you see it in your Grafana?

Answer

  • Yes, Scott is too awesome!

Question

  • Can you modify the file loki-write.py to say something related to your name?

Answer

msg = 'On server {host} detected error - Treasure Wuz Here'.format(host=host)

Question

  • Can you modify that to see the actual entries?

Answer

  • Yes

Lab

Complete the killercoda lab found here: https://killercoda.com/het-tanis/course/Linux-Labs/108-kafka-to-loki-logging

Question

  • Did you get it all to work?

Answer

  • Yes

Question

  • Does the flow make sense in the context of this diagram?

Answer

  • Yes
  • kcat writes out to Kafka
  • Promtail receives the messages from Kafka
  • Promtail pushes to Loki
  • Grafana displays the result

Question

Can you find any configurations or blogs that describe why you might want to use this architecture or how it has been used in the industry?

Answer

  • Kafka interconnects log producers and log ingesters. Kafka can ingest logs from all types of sources and supports analyzing them in real time.
  • Apache Kafka, according to Apache, is the most popular open-source stream-processing software.5




  1. Professional Linux User Group Security Engineering Unit 6 Worksheet Web Book ProLUG, 2025. ↩︎

  2. Building Secure and Reliable Systems Web Book Google, 2025. ↩︎ ↩︎

  3. How to Debug Blog Julia Evans, 2019. ↩︎

  4. SRE Handbook Web Book Google, 2025. ↩︎

  5. Powered by Kafka Website Apache, 2025. ↩︎

ProLUG SEC Unit 5 🔒

Intro 👋

Repositories and patching is the general theme of this unit. We dive into creating internally audited repositories for safe enterprise operation. This configuration allows for greater security scrutiny and compatibility testing before scheduled patching takes place. For example, a company might skip every other version of a package to reduce update cadence, giving more time for assessment, correction, and troubleshooting of internal software. Much like any enterprise decision, an analysis of cost and effort must take place. 1


Worksheet

Discussion Post 1

Review the rocky documentation on Software management in Linux.2

Question

  • What do you already understand about the process?

Answer

  • I gained a decent understanding of the package management systems of both RHEL- and Debian-based distros by studying for the LPIC-1 and completing the ELAC course through this group. From that I learned about versioning and dependency management, including modules, as well as the differences between RPM, YUM, and DNF and how these evolutions of package management came into being.

Question

  • What new things did you learn or pick up?

Answer

  • I did not understand the depth to which RPM packages track package data.
  • I did not know that headers could be edited in the package in order to add custom labeling.
  • I had basic awareness and a high-level understanding of internal package management. Prior to our lecture, I had not seen an internal package server/relay be set up/configured.
  • I did not know much about EPEL beyond having to enable it in the CLI for additional packages outside of the default repos.

Question

  • What are the DNF plugins? What is the use of the versionlock plugin?

Answer

  • DNF plugins are external modules that extend the functionality of DNF. I have some experience activating COPR repos using DNF plugins. The versionlock plugin allows an admin/engineer/dev to lock a particular package to a specified version so that it is not mistakenly changed/overwritten. In my experience this is typical in software development, where many dependencies might be needed. Breaking updates were commonplace, so most modern software projects contain a lock file that indicates the specific dependency versions that must be used to build or interpret the project.
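For a concrete sketch: on RHEL 9-family systems the plugin ships as python3-dnf-plugin-versionlock and records locks in a plain list file, one glob per line (the package version below is hypothetical):

```text
# /etc/dnf/plugins/versionlock.list
# appended by: dnf versionlock add httpd
httpd-0:2.4.62-1.el9.*
```

`dnf versionlock list` shows the current locks, and `dnf versionlock delete httpd` releases one.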

Question

  • What is an EPEL? Why do you need to consider this when using one?

Answer

  • EPEL stands for Extra Packages for Enterprise Linux. These packages exist outside of the core enterprise offering and can therefore cause issues. Unlike core packages, these extra packages could introduce compatibility problems, resulting in rejection by endorsed support specialists.

Discussion Post 2

Do a Google search for “patching enterprise Linux” 3

Question

  • What blogs (or AI) do you find that enumerates a list of steps or checklists to consider?

Answer

  • I used two sources that I found to be concise:
    • RedHat Documentation4
    • Chat*ippity

Question

  • After looking at that, how does patching a fleet of systems in the enterprise differ from pushing “update now” on your local desktop?

Answer

  • Patching a fleet of systems involves the systematic updating of installed software to fix security vulnerabilities, improve stability, and introduce minor enhancements. The process is governed by organizational policies to ensure uptime and compliance.

Because changes affect many systems simultaneously, patching acts as an amplifier of problems if not handled carefully. Therefore, enterprise patching must be strategic, managed, and auditable.

In contrast, running updates on a personal system is typically an automated, low-risk operation, with little concern for version conflicts or trust in the source. Additionally, modern filesystems like ZFS and Btrfs provide the ability to quickly roll back changes if something fails.

Question

  • What seems to be the major considerations? What seems to be the major roadblocks?

Answer

  • Major Considerations

    • Uptime and Service Continuity
    • Security and Compliance
    • Testing and Validation
    • Dependency Management
    • Rollback and Recovery Planning
    • Orchestration and Scalability
  • Major Roadblocks

    • Legacy Systems
    • Change Resistance
    • Incomplete Asset Inventory
    • Tight Maintenance Windows
    • Patch Quality and vendor Bugs
    • Complex Dependencies and Integration Points
    • Resource Constraints

Definitions

  • Patching The process of applying updates to fix bugs, improve security, or enhance performance.
  • Repos (Repositories) Remote or local collections of software packages used by package managers.
  • Software Applications or tools installed on a system to perform specific functions.
  • EPEL (Extra Packages for Enterprise Linux) A Fedora project providing additional packages for RHEL-based systems.
  • BaseOS The core operating system components in RHEL/Rocky, including the kernel and essential services.
  • AppStream A modular repository in RHEL/Rocky that provides applications and tools in versioned streams.
  • httpd The Apache HTTP Server package available via repos for web serving.
  • Patch A type of update package that modifies existing software without replacing the whole binary.
  • GPG Key Used to verify the integrity and authenticity of packages in a repo.
  • DNF/YUM Package managers in RHEL-based systems used to install, update, and manage software packages.

Lab 🧪

Apache STIGs Review

  1. Look at the 4 STIGs for “tls”
  • What file is fixed for all of them to be remediated?
# Install httpd on your Rocky server
systemctl stop wwclient
dnf install -y httpd
systemctl start httpd
  2. Check STIG V-214234

Question

  • What is the problem?

Answer

  • Event logging can fail.

Question

  • What is the fix?

Answer

  • This can be fixed by implementing failure alerts.

Question

  • What type of control is being implemented?

Answer

  • This type of control would be a Detective Control

Question

  • Is it set properly on your system?

Answer

  • No, this is not set up by default; it must be implemented after installation.

Check STIG V-214248

Question

  • What is the problem?

Answer

  • By default, sensitive information, including security controls, may be available to all users because privileged user access controls have not been implemented.

Question

  • What is the fix?

Answer

  • Develop roles for privileged users and define access policies.

Question

  • What type of control is being implemented?

Answer

  • This is a preventative type control.

Question

  • Is it set properly on your system?

Answer

  • No, not by default. Of course the superuser has special privileges. However, beyond that there are no other tiers of access.

Question

  • How do you think SELINUX will help implement this control in an enforcing state? Or will it not affect it?

Answer

  • SELinux allows for strong group creation and control, so it would help batch users together and apply granular control mechanisms.

Building repos

# Start out by removing all your active repos
cd /etc/yum.repos.d
mkdir old_archive
mv *.repo old_archive
dnf repolist
# Mount the local repository and make a local repo
mount -o loop /lab_work/repos_and_patching/Rocky-9.5-x86_64-dvd.iso /mnt
df -h #should see the mount point
ls -l /mnt
touch /etc/yum.repos.d/rocky9.repo
vi /etc/yum.repos.d/rocky9.repo
[BaseOS]
name=BaseOS Packages Rocky Linux 9
metadata_expire=-1
gpgcheck=1
enabled=1
baseurl=file:///mnt/BaseOS/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[AppStream]
name=AppStream Packages Rocky Linux 9
metadata_expire=-1
gpgcheck=1
enabled=1
baseurl=file:///mnt/AppStream/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
#Save with esc :wq or “shift + ZZ”

Question

  • Do the paths you’re using here make sense to you based off what you saw with the ls -l? Why or why not?

Answer

  • TODO
chmod 644 /etc/yum.repos.d/rocky9.repo
dnf clean all
# Test the local repository
dnf repolist
dnf --disablerepo="*" --enablerepo="AppStream" list available
Approximately how many are available?
dnf --disablerepo="*" --enablerepo="AppStream" list available | nl
dnf --disablerepo="*" --enablerepo="AppStream" list available | nl | head
dnf --disablerepo="*" --enablerepo="BaseOS" list available
Approximately how many are available?
dnf --disablerepo="*" --enablerepo="BaseOS" list available | nl
dnf --disablerepo="*" --enablerepo="BaseOS" list available | nl | head
# Try to install something
dnf --disablerepo="*" --enablerepo="BaseOS AppStream" install gimp
hit “n”

Question

  • How many packages does it want to install?

Answer

  • TODO

Question

How can you tell they’re from different repos?

Answer

  • TODO
# Share out the local repository for your internal systems (tested on just this one system)
rpm -qa | grep -i httpd
systemctl status httpd
ss -ntulp | grep 80
lsof -i :80
cd /etc/httpd/conf.d
vi repos.conf
<Directory "/mnt">
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
Alias /repo /mnt
<Location /repo>
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Location>
systemctl restart httpd
vi /etc/yum.repos.d/rocky9.repo
###USE YOUR HAMMER MACHINE IN BASEURL###
[BaseOS]
name=BaseOS Packages Rocky Linux 9
metadata_expire=-1
gpgcheck=1
enabled=1
#baseurl=file:///mnt/BaseOS/
baseurl=http://hammer25/repo/BaseOS/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[AppStream]
name=AppStream Packages Rocky Linux 9
metadata_expire=-1
gpgcheck=1
enabled=1
#baseurl=file:///mnt/AppStream/
baseurl=http://hammer25/repo/AppStream/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

Question

  • Do the paths you’ve modified at baseurl make sense to you? If not, what do you need to better understand?
dnf clean all
dnf repolist
# Try to install something
dnf --disablerepo="*" --enablerepo="BaseOS AppStream" install gimp

Digging Deeper

Question

  • You’ve set up a local repository and you’ve shared that repo out to other systems that might want to connect. Why might you need this if you’re going to fully air-gap systems? Is it still necessary even if your enterprise patching solution is well designed? Why or why not?

Answer

  • We need a unified checkpoint that ensures secure conformity of packages before patching air-gapped systems. Air-gapped systems are not eternally disconnected; they can be connected to other systems in a highly controlled manner.

Question

  • Can you add the Mellanox ISO that is included in the /lab_work/repos_and_patching section to be a repository that your systems can access? If you have trouble, troubleshoot and ask the group for help.

Answer

  • Yes you can; it must be given a special header and be registered, i.e., packaged into the local repo, in order for other package management systems to see it.
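If the ISO already ships repodata, it can be shared the same way as the Rocky ISO above. A hedged sketch (the mount point and repo id are assumptions):

```ini
# /etc/yum.repos.d/mellanox.repo -- after loop-mounting the Mellanox ISO at /mnt/mellanox
[Mellanox]
name=Mellanox Packages (local ISO)
baseurl=file:///mnt/mellanox/
enabled=1
gpgcheck=0
metadata_expire=-1
```

If the ISO lacks repodata, one option is to generate it over the RPM directory with createrepo_c before pointing baseurl at it.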




  1. Professional Linux User Group Security Engineering Unit 5 Web Book ProLUG, 2025. ↩︎

  2. Rocky Documentation: Software Management Web Book Rocky Docs, 2025. ↩︎

  3. Google Search Engine Web Search Engine, 2025. ↩︎

  4. Epel Documentation Web Docs IBM, 2025. ↩︎

ProLUG SEC Unit 4 🔒

Intro 👋

Bastions and airgaps are strategies for controlling how systems connect—or don’t connect—to the outside world.1


Worksheet

Discussion Post 1

https://aws.amazon.com/search/?searchQuery=air+gapped#facet_type=blogs&page=1

https://aws.amazon.com/blogs/security/tag/bastion-host/

  • Or find some on your own about air-gapped systems.

Question

  • What seems to be the theme of air-gapped systems?

Answer

Air-gapped systems are highly controlled and isolated systems. The degree of isolation directly correlates to the level of operational burden, as modern productive systems are typically highly connected to LANs and/or WANs.

  • Blocking/Limiting/Bottlenecking Network Traffic
  • Limiting Services to Bare Essentials
  • Mitigating Data Egress
  • Quarantining unexpected behavior
  • Logging use events

Question

  • What seems to be their purpose?

Answer

  • To limit attack surface and mitigate malicious access and/or data infiltration/exfiltration

Question

  • If you use google, or an AI, what are some of the common themes that come up when asked about air-gapped or bastion systems?

Answer

  • Common Themes in Air-Gapped Systems

    • Data Transfer Procedures
    • Patch Management & Updates
    • Logging and Auditing
    • Threat Models
    • Authentication & Access
    • Compliance & Certification
    • Operational Burden
  • Common Themes in Bastion Hosts

    • Network Segmentation
    • Hardened OS Configuration
    • Jump Host Architecture
    • Access Control & MFA
    • Monitoring and Alerting
    • Change Management
  • Shared Themes

    • Both require strict access control
    • Emphasis on tamper resistance and detection
    • Tradeoffs between security vs. usability
    • Often part of zero-trust or defense-in-depth architectures

Discussion Post 2

Question

Do a Google or AI search of topics around jailing a user or processes in Linux.

Answer

User Jailing Techniques

  • chroot
  • Namespaces
  • Control groups (cgroups)
  • Seccomp
  • AppArmor / SELinux

Container and Jail Environments

  • LXC
  • Docker / Podman
  • Firejail
  • Bubblewrap (bwrap): unprivileged namespaces, used by Flatpak

Use Cases

  • Jailed SSH users: Using chroot in sshd_config to restrict access.
  • systemd-nspawn: Lightweight containers for sandboxed environments.
  • Flatpak / Snap: Sandboxed app delivery systems for desktop applications.

Related Tools & Commands

  • chroot, unshare, setfacl, auditd
  • firejail, bwrap, systemd-nspawn
  • docker, podman, lxc-start

Question

Can you enumerate the methods of jailing users?

Answer

Yes, there are five possible avenues that I know of: chroot, namespaces, cgroups, seccomp, and mandatory access control (AppArmor/SELinux).

Question

Can you think of when you’ve been jailed as a Linux user? If not, can you think of the useful ways to use a jail?

Answer

No, I have not experienced being jailed as a user. However, I can think of some use cases: one would be as a honeypot for observability; another would be to trap crawlers/bots.


Definitions

  • Air-gapped Air gapped means physically isolated from unsecured networks.
  • Bastion A bastion is a secure gateway between a trusted and untrusted network.
  • Jailed process A jailed process is restricted to a limited portion of the filesystem.
  • Isolation Isolation separates processes or systems to limit access and interaction.
  • Ingress The intake of data into a system.
  • Egress The outflow of data from a system.
  • Exfiltration When a bad actor or program is able to extract data from a system.
  • Cgroups Cgroups limit and monitor resource usage of Linux processes.
  • Namespaces isolate system resources for process groups.
  • Mount restricts filesystem views per process group.
  • PID isolates process ID numbers between groups.
  • IPC isolates inter-process communication resources.
  • UTS allows separate host and domain names.

Lab 🧪🥼

Process of a chroot jail build

    1. Create a chroot in /var
mkdir /var/chroot
    2. Copy core binaries from the system into the chroot: bin, lib64, dev, etc, home, usr/bin, lib/x86_64-linux-gnu
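Those two steps can be sketched in shell; ldd discovers which libraries the copied binary needs (jail path per the lab, function name is mine):

```shell
# Build a minimal jail containing bash and its shared libraries.
build_jail() {
  local jail="$1" lib
  mkdir -p "$jail/bin"
  cp /bin/bash "$jail/bin/"
  # ldd lists the shared objects bash links against; mirror each into the jail.
  for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
    mkdir -p "$jail$(dirname "$lib")"
    cp "$lib" "$jail$lib"
  done
}

build_jail /var/chroot || echo "build failed (are you root?)"
# Enter the jail (requires root):
# chroot /var/chroot /bin/bash
```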

Question

What seems to be the theme of air-gapped systems?

Answer

  • Disconnecting them from regular operational activities.

Question

What seems to be their purpose?

Answer

  • Reduce or eliminate the possibility of infiltration and exfiltration.

Question

What are some of the common themes that come up when asked about air-gapped or bastion systems?

Answer

Air Gapped

  • Isolation
  • Threat Mitigation
  • Data Transfer Control
  • Update Challenges
  • Insider Threats
  • Bridging Attacks
  • Regulatory Compliance

Bastion Hosts

  • Single Point of Entry
  • Heavily Monitored
  • Hardened Configuration
  • Authentication Hub
  • Session Recording
  • Access Segregation
  • Zero Trust Integration
  • Threat Containment

Discord: https://discord.com/invite/m6VPPD9usw Youtube: https://www.youtube.com/@het_tanis8213 Twitch: https://www.twitch.tv/het_tanis ProLUG PSC Repo: https://github.com/ProfessionalLinuxUsersGroup/psc ProLUG PSC Book: https://professionallinuxusersgroup.github.io/psc/ ProLUG Book of Labs: https://leanpub.com/theprolugbigbookoflabs KillerCoda: https://killercoda.com/het-tanis



  1. Professional Linux User Group Security Engineering Unit 4 Web Book ProLUG, 2025. ↩︎

ProLUG SEC Unit 3 đź”’

Intro đź‘‹

Understanding and implementing network standards and compliance measures can make critical security controls far more effective.1


Worksheet

Discussion Post 1

There are 16 STIGs that involve PAM for RHEL 9.2

Question

  • What are the mechanisms and how do they affect PAM functionality?

Answer

Hardening Defaults

STIGs replace permissive PAM modules with stricter ones. Two categories are covered when STIGing PAM:

  1. Lockout policies that affect login frequency and failure handling.
  2. Password strength enforcement that affects password complexity and re-use.

Review /etc/pam.d/sshd on a Linux system.

Question

  • What is happening in that file relative to these functionalities?

Answer

  • This file specifies the PAM module control flags that sshd uses during authentication.

Question

  • What are the common PAM modules?

Answer

  • pam_sepermit.so, pam_nologin.so, and the password-auth and postlogin include stacks.
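For context, a typical RHEL 9 /etc/pam.d/sshd looks roughly like the sketch below (abridged and from memory, so verify against your own system); the substack/include lines are what pull in password-auth and postlogin:

```shell
# Abridged sketch of /etc/pam.d/sshd on RHEL 9 (not verbatim)
auth       substack     password-auth
auth       include      postlogin
account    required     pam_sepermit.so
account    required     pam_nologin.so
account    include      password-auth
password   include      password-auth
session    include      password-auth
session    include      postlogin
```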

Question

  • Look for a blog post or article about PAM that discusses real-world application. Post it here and give us a quick synopsis.

Answer

https://www.redhat.com/en/blog/pluggable-authentication-modules-pam

Synopsis:

PAM is a modular and flexible framework for integrating authentication methods into applications. By separating and abstracting authentication mechanisms from application code, PAM allows admins to manage authentication policies centrally. PAM also allows for customized authentication processes (security through obscurity).

Discussion Post 2

Intro to the scenario

Read about active directory (or LDAP) configurations of Linux via sssd3 👍

Question

  • Why do we not want to just use local authentication in Linux? Or really any system?

Answer

  • Local authentication presents several problems. Firstly, there is no federated access, so there is fragmentation of systems. Secondly, scalability is an issue as each local system manages that local system’s users, requiring individual account provisioning and password management. Thirdly, it complicates auditing and compliance, since there is no centralized logging or consistent policy enforcement. Additionally, stale or orphaned accounts can accumulate unnoticed, increasing security risks. Finally, it prevents the implementation of modern security practices such as single sign-on (SSO), multi-factor authentication (MFA), and role-based access control across a distributed environment.

Question

  • There are 4 SSSD STIGs.

Response

Vuln ID 258122

Enforce Smart Card Authentication – Require certificate-based smart card login to implement multi-factor authentication and enhance access security.

Vuln ID 258131

Validate Certificate Chains – Ensure that certificates used for PKI-based authentication are properly validated by building a complete certification path to a trusted root.

Vuln ID 258132

Associate Certificates with User Accounts – Confirm that every authentication certificate is explicitly mapped to a valid user account to maintain identity integrity.

Vuln ID 258133

Restrict Credential Caching Duration – Limit the validity period of cached authentication credentials to a maximum of 24 hours to reduce risk in the event of compromise.


Definitions

  • PAM Pluggable Authentication Modules provide a flexible mechanism for authenticating users on Unix-like systems.
  • AD Active Directory is Microsoft’s centralized directory service for authentication, authorization, and resource management.
  • LDAP Lightweight Directory Access Protocol is an open, vendor-neutral protocol for accessing and maintaining distributed directory information services.
  • sssd System Security Services Daemon provides access to remote identity and authentication providers like LDAP or Kerberos.
  • oddjob A D-Bus service used to perform privileged tasks on behalf of unprivileged users, often for domain enrollment or home directory creation.
  • krb5 Kerberos 5 is a network authentication protocol that uses tickets for securely proving identity over untrusted networks.
  • realm/realmd A tool that simplifies joining and managing a system in a domain like Active Directory or IPA using standard services.
  • wheel (system group in RHEL) A special administrative group whose members are allowed to execute privileged commands using sudo.

Lab

Examine STIG V-257986

Question

  • What is the problem?

Answer

  • RHEL 9 needs PAM enabled for SSHD

Question

  • What is the fix?

Answer

  • Enabling UsePAM in /etc/ssh/sshd_config

Question

  • What type of control is being implemented?

Answer

  • A Technical Preventative control

Question

  • Is it set properly on your system?

Answer

  • Yes, it is:
grep -i pam /etc/ssh/sshd_config

Question

  • Can you remediate this finding?
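One hedged way to remediate, demonstrated on a throwaway copy rather than the live file (the sed pattern assumes the directive appears commented out, as in the stock config; on a real system you would edit /etc/ssh/sshd_config as root and restart sshd afterwards):

```shell
# Demo on a temp copy of the relevant lines
CFG=$(mktemp)
printf '#UsePAM no\nPort 22\n' > "$CFG"
# uncomment/force the directive to the compliant value
sed -i 's/^#\?UsePAM.*/UsePAM yes/' "$CFG"
grep '^UsePAM' "$CFG"   # prints: UsePAM yes
```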

Check and remediate STIG V-258055

Questions

  • What is the problem?
  • What is the fix?
  • What type of control is being implemented?
  • Are there any major implications to think about with this change on your system? Why or why not?
  • Is it set properly on your system?
  • How would you go about remediating this on your system?

Answers

  • After 3 unsuccessful root login attempts, the root account is locked.
  • Enable faillock in authselect + even_deny_root
  • Technical preventative
  • Yes; anyone can be locked out, including root.
  • No, it is commented out by default for good reason
  • I would not enable even_deny_root
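If you did adopt the lockout policy (without even_deny_root), the settings can live in /etc/security/faillock.conf once faillock is enabled through authselect. The values below are illustrative, not a verbatim STIG baseline:

```shell
# Illustrative /etc/security/faillock.conf values
deny = 3             # lock after 3 failures
fail_interval = 900  # counted within a 15-minute window
unlock_time = 0      # stay locked until an admin unlocks
# even_deny_root     # deliberately left commented, per the answer above
```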

Check and remediate STIG V-258098

Questions

  • What is the problem?
  • What is the fix?
  • What type of control is being implemented?
  • Is it set properly on your system?

Answers

  • Password complexity module pwquality must be enabled in system-auth
  • Check /etc/pam.d/system-auth and see if the line exists or not
  • Technical preventative control
  • Yes it is properly implemented

Filter STIGS by “password complexity”

Questions

  • How many are there?
  • What are the password complexity rules?
  • Are there any you haven’t seen before?

Answers

  • 14 STIGS related to Password complexity
  • The somewhat standard four character classes (one upper, one lower, one digit, one special) and a 15-character total minimum. Max consecutive characters from the same class is 4, and max identical repeated characters is 3.
  • Yes, the max-repeat rules.
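Those rules map onto /etc/security/pwquality.conf roughly as shown below; the values are illustrative rather than a verbatim STIG baseline (negative credit values mean "require at least one"):

```shell
# Illustrative /etc/security/pwquality.conf settings
minlen = 15          # 15-character minimum
ucredit = -1         # at least one uppercase letter
lcredit = -1         # at least one lowercase letter
dcredit = -1         # at least one digit
ocredit = -1         # at least one special character
maxclassrepeat = 4   # max consecutive chars from the same class
maxrepeat = 3        # max identical consecutive chars
```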

OpenLDAP Setup

You will likely not build an LDAP server in a real world environment. We are doing it for understanding and ability to complete the lab. In a normal corporate environment this is likely Active Directory.

To simplify some of the typing in this lab, there is a file located at /lab_work/identity_and_access_management.tar.gz that you can pull down to your system with the correct .ldif files.

[root@hammer1 ~]# cp /lab_work/identity_and_access_management.tar.gz .
[root@hammer1 ~]# tar -xzvf identity_and_access_management.tar.gz
1. Stop the warewulf client
[root@hammer1 ~]# systemctl stop wwclient
2. Edit your /etc/hosts file

Look for and edit the line that has your current server

[root@hammer1 ~]# vi /etc/hosts

Entry for hammer1 for example:

192.168.200.151 hammer1 hammer1-default ldap.prolug.lan ldap
3. Setup dnf repo
[root@hammer1 ~]# dnf config-manager --set-enabled plus
[root@hammer1 ~]# dnf repolist
[root@hammer1 ~]# dnf -y install openldap-servers openldap-clients openldap
4. Start slapd systemctl
[root@hammer1 ~]# systemctl start slapd
[root@hammer1 ~]# ss -ntulp | grep slapd
5. Allow ldap through the firewall
[root@hammer1 ~]# firewall-cmd --add-service={ldap,ldaps} --permanent
[root@hammer1 ~]# firewall-cmd --reload
[root@hammer1 ~]# firewall-cmd --list-all
6. Generate a password (our example uses testpassword). This will return a salted SSHA hash. Save this password and salted hash for later input
[root@hammer1 ~]# slappasswd

Output:

New password:
Re-enter new password:
{SSHA}wpRvODvIC/EPYf2GqHUlQMDdsFIW5yig

7. Change the password
[root@hammer1 ~]# vi changerootpass.ldif
dn: olcDatabase={0}config,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}vKobSZO1HDGxp2OElzli/xfAzY4jSDMZ
[root@hammer1 ~]# ldapadd -Y EXTERNAL -H ldapi:/// -f changerootpass.ldif 

Output:

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "olcDatabase={0}config,cn=config"

8. Generate basic schemas
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
9. Set up the domain (USE THE PASSWORD YOU GENERATED EARLIER)
[root@hammer1 ~]# vi setdomain.ldif
dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=Manager,dc=prolug,dc=lan" read by * none

dn: olcDatabase={2}mdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=prolug,dc=lan

dn: olcDatabase={2}mdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=Manager,dc=prolug,dc=lan

dn: olcDatabase={2}mdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}s4x6uAxcAPZN/4e3pGnU7UEIiADY0/Ob

dn: olcDatabase={2}mdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by dn="cn=Manager,dc=prolug,dc=lan" write by anonymous auth by self write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=Manager,dc=prolug,dc=lan" write by * read
10. Run it
[root@hammer1 ~]# ldapmodify -Y EXTERNAL -H ldapi:/// -f setdomain.ldif

Output:

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "olcDatabase={1}monitor,cn=config"
modifying entry "olcDatabase={2}mdb,cn=config"
modifying entry "olcDatabase={2}mdb,cn=config"
modifying entry "olcDatabase={2}mdb,cn=config"
modifying entry "olcDatabase={2}mdb,cn=config"

11. Search and verify the domain is working.
[root@hammer1 ~]# ldapsearch -H ldap:// -x -s base -b "" -LLL "namingContexts"

Output:

dn:
namingContexts: dc=prolug,dc=lan

12. Add the base group and organization.
[root@hammer1 ~]# vi addou.ldif
dn: dc=prolug,dc=lan
objectClass: top
objectClass: dcObject
objectclass: organization
o: My prolug Organisation
dc: prolug

dn: cn=Manager,dc=prolug,dc=lan
objectClass: organizationalRole
cn: Manager
description: OpenLDAP Manager

dn: ou=People,dc=prolug,dc=lan
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=prolug,dc=lan
objectClass: organizationalUnit
ou: Group
[root@hammer1 ~]# ldapadd -x -D cn=Manager,dc=prolug,dc=lan -W -f addou.ldif
13. Verifying
[root@hammer1 ~]# ldapsearch -H ldap:// -x -s base -b "" -LLL "+"  
[root@hammer1 ~]# ldapsearch -x -b "dc=prolug,dc=lan" ou
14. Add a user

Generate a password (use testuser1234)

[root@hammer1 ~]# slappasswd 
[root@hammer1 ~]# vi adduser.ldif
dn: uid=testuser,ou=People,dc=prolug,dc=lan
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
cn: testuser
sn: temp
userPassword: {SSHA}yb6e0ICSdlZaMef3zizvysEzXRGozQOK
loginShell: /bin/bash
uidNumber: 15000
gidNumber: 15000
homeDirectory: /home/testuser
shadowLastChange: 0
shadowMax: 0
shadowWarning: 0

dn: cn=testuser,ou=Group,dc=prolug,dc=lan
objectClass: posixGroup
cn: testuser
gidNumber: 15000
memberUid: testuser
[root@hammer1 ~]# ldapadd -x -D cn=Manager,dc=prolug,dc=lan -W -f adduser.ldif
15. Verify that your user is in the system.
[root@hammer1 ~]# ldapsearch -x -b "ou=People,dc=prolug,dc=lan"
16. Secure the system with TLS (accept all defaults)
[root@hammer1 ~]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/pki/tls/ldapserver.key -out /etc/pki/tls/ldapserver.crt
[root@hammer1 ~]# chown ldap:ldap /etc/pki/tls/{ldapserver.crt,ldapserver.key}
[root@hammer1 ~]# ls -l /etc/pki/tls/ldap*

Output:

-rw-r--r--. 1 ldap ldap 1224 Apr 12 18:23 /etc/pki/tls/ldapserver.crt
-rw-------. 1 ldap ldap 1704 Apr 12 18:22 /etc/pki/tls/ldapserver.key

[root@hammer1 ~]# vi tls.ldif
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/pki/tls/ldapserver.crt
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/pki/tls/ldapserver.key
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/pki/tls/ldapserver.crt
[root@hammer1 ~]# ldapadd -Y EXTERNAL -H ldapi:/// -f tls.ldif
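Before restarting slapd it can be worth sanity-checking the certificate. The sketch below generates a throwaway pair in a temp directory so it runs anywhere (the CN is an assumption), then prints the subject and validity window; on the lab box you would point the second command at /etc/pki/tls/ldapserver.crt instead:

```shell
T=$(mktemp -d)
# throwaway self-signed pair, mirroring the openssl req command above
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$T/ldapserver.key" -out "$T/ldapserver.crt" \
  -subj "/CN=ldap.prolug.lan" 2>/dev/null
# show who the cert claims to be and when it expires
openssl x509 -in "$T/ldapserver.crt" -noout -subject -dates
```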
17. Fix the /etc/openldap/ldap.conf to allow for certs
[root@hammer1 ~]# vi /etc/openldap/ldap.conf
#
# LDAP Defaults
#

# See ldap.conf(5) for details
# This file should be world readable but not world writable.

#BASE dc=example,dc=com
#URI ldap://ldap.example.com ldap://ldap-master.example.com:666

#SIZELIMIT 12
#TIMELIMIT 15
#DEREF never

# When no CA certificates are specified the Shared System Certificates
# are in use. In order to have these available along with the ones specified
# by TLS_CACERTDIR one has to include them explicitly:

TLS_CACERT /etc/pki/tls/ldapserver.crt
TLS_REQCERT never

# System-wide Crypto Policies provide up to date cipher suite which should
# be used unless one needs a finer grinded selection of ciphers. Hence, the
# PROFILE=SYSTEM value represents the default behavior which is in place
# when no explicit setting is used. (see openssl-ciphers(1) for more info)
#TLS_CIPHER_SUITE PROFILE=SYSTEM

# Turning this off breaks GSSAPI used with krb5 when rdns = false
SASL_NOCANON on
[root@hammer1 ~]# systemctl restart slapd

SSSD Configuration and Realmd join to LDAP

SSSD can connect a server to a trusted LDAP system and authenticate users for access to local resources. You will likely do this during your career and it is a valuable skill to work with.

1. Install sssd, configure, and validate that the user is seen by the system
[root@hammer1 ~]# dnf install openldap-clients sssd sssd-ldap oddjob-mkhomedir authselect
[root@hammer1 ~]# authselect select sssd with-mkhomedir --force
[root@hammer1 ~]# systemctl enable --now oddjobd.service
[root@hammer1 ~]# systemctl status oddjobd.service
2. Uncomment and fix the lines in /etc/openldap/ldap.conf
[root@hammer1 ~]# vi /etc/openldap/ldap.conf

Output:

BASE dc=prolug,dc=lan
URI ldap://ldap.prolug.lan/

3. Edit the sssd.conf file
[root@hammer1 ~]# vi /etc/sssd/sssd.conf
[domain/default]
id_provider = ldap
autofs_provider = ldap
auth_provider = ldap
chpass_provider = ldap
ldap_uri = ldap://ldap.prolug.lan/
ldap_search_base = dc=prolug,dc=lan
#ldap_id_use_start_tls = True
#ldap_tls_cacertdir = /etc/openldap/certs
cache_credentials = True
#ldap_tls_reqcert = allow

[sssd]
services = nss, pam, autofs
domains = default

[nss]
homedir_substring = /home
[root@hammer1 ~]# chmod 0600 /etc/sssd/sssd.conf
[root@hammer1 ~]# systemctl start sssd
[root@hammer1 ~]# systemctl status sssd

4. Validate that the user can be seen

[root@hammer1 ~]# id testuser

Output:

uid=15000(testuser) gid=15000 groups=15000
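Since id resolves through NSS, getent is another way to confirm the lookup chain; it consults every source configured in nsswitch.conf, not just /etc/passwd:

```shell
# once sssd is in the passwd line of nsswitch.conf, LDAP users appear here
getent passwd testuser || echo "testuser not visible (sssd not configured here)"
# sanity check against a user that always exists locally
getent passwd root
```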

Please reboot the lab machine when done.

Discord: https://discord.com/invite/m6VPPD9usw Youtube: https://www.youtube.com/@het_tanis8213 Twitch: https://www.twitch.tv/het_tanis ProLUG PSC Repo: https://github.com/ProfessionalLinuxUsersGroup/psc ProLUG PSC Book: https://professionallinuxusersgroup.github.io/psc/ ProLUG Book of Labs: https://leanpub.com/theprolugbigbookoflabs KillerCoda: https://killercoda.com/het-tanis



  1. Professional Linux User Group Security Engineering Unit 3 Web Book ProLUG, 2025. ↩︎

  2. STIGs that involve PAM for RHEL 9 Website (https://docs.rockylinux.org/guides/security/pam/) Source, 2025. ↩︎

  3. configurations of Linux via sssd Website Source, 2025. ↩︎

ProLUG SEC Unit 2 đź”’

Intro đź‘‹

This week covers more implementation of Security Technical Implementation Guides (STIGs), and we look at LDAP (Lightweight Directory Access Protocol) installation and setup. This unit also introduces foundational knowledge on analyzing, configuring, and hardening networking components using tools and frameworks like STIGs, OpenSCAP, and DNS configurations.


Discussion Post 1

Preface

There are 401 STIGs for RHEL 9. If you filter in your STIG viewer for sysctl there are 33 (mostly network focused); for ssh, 39; and for network, 58. There is some overlap between those, but review them and answer these questions.

Question 1. As systems engineers why are we focused on protecting the network portion of our server builds?

Answer

  • Most attacks come through the network
  • Misconfigured services can expose critical ports.
  • Data in transit is vulnerable without proper encryption and access control.
  • External exposure often increases the attack surface for things like brute-force attempts, malware injection, or unauthorized access.

Question 2. Why is it important to understand all the possible ingress points to our servers that exist?

Answer

  • Ingress points = potential paths of attack. Unexpected ingress can be exploited.
  • Zero-trust environments rely on strict control and observability of ingress.
  • Compliance and auditing require accurate records of what’s accessible.

Question 3. Why is it so important to understand the behaviors of processes that are connecting on those ingress points?

Answer

  • Security posture depends on visibility
  • Attackers scan for overlooked vulnerabilities
  • Automation tools (e.g., Ansible, Terraform) can introduce new ingress points unknowingly during updates.
  • Incident response is much faster and more effective when engineers understand the network surface.

Discussion Post 2

Intro to the scenario

Read this: https://ciq.com/blog/demystifying-and-troubleshooting-name-resolution-in-rocky-linux/ or similar blogs on DNS and host file configurations.

Question

  • What is the significance of the nsswitch.conf file?

Answer

The /etc/nsswitch.conf file controls the order in which name resolution methods are used.
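A quick way to inspect that order on any Linux box (the fallback echo is just a guard for minimal systems that lack the file):

```shell
# show the hosts lookup order; "files dns" is a typical default
grep '^hosts:' /etc/nsswitch.conf 2>/dev/null \
  || echo 'hosts: files dns'
```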

Question

  • What are security problems associated with DNS and common exploits? (May have to look into some more blogs or posts for this)

Answer

Core issues with DNS:

  • Traditional DNS can be spoofed due to a lack of built-in verification.
  • Queries and responses are sent in plaintext, making confidentiality an issue.
  • There is no way to validate the source of the DNS data.
  • Centralized resolvers are a single point of failure.

Common Exploits:

  • Spoofing (false record injection)
  • Flooding (overwhelming the resolver)
  • Tunneling (query-based exfiltration)
  • Hijacking (modifying domain registration data)
  • Typosquatting (registering similar domains), a new phrase for me


Definitions

  • sysctl Linux interface to modify kernel parameters at runtime for system performance and security.
  • nsswitch.conf Configuration file controlling the order of name service lookups (e.g., DNS, files, LDAP).
  • DNS: Domain Name System translates human-readable domain names into IP addresses.
  • OpenSCAP Open-source framework for automated vulnerability scanning, compliance checking, and security auditing.
  • CIS Benchmarks Prescriptive security configuration guidelines provided by the Center for Internet Security.
  • ss/netstat Command-line tools to display network sockets, connections, and statistics on Unix-like systems.
  • tcpdump Command-line packet analyzer for capturing and inspecting network traffic in real-time.
  • ngrep Network packet analyzer like grep, allowing pattern matching on network traffic payloads.

Lab đź§Ş

IP Forwarding

Question

  • Does this system appear to be set to forward? Why or why not?

Answer

  • No. All relevant net.ipv4.conf.*.forwarding and net.ipv4.ip_forward values are set to 0.

Martians

Question

  • What are martians and is this system allowing them?

Answer

  • Martians are packets with invalid or bogus source/destination addresses. This system is not logging them (log_martians = 0), but whether they are accepted depends on other rules such as reverse-path filtering.

Kernel Panic Behavior

Question

  • How does this system handle kernel panics?

Answer

  • kernel.panic = 0 means the system won’t auto-reboot on panic. panic_on_oops = 1 indicates it will panic on kernel oops errors. Other panic triggers are mostly disabled.

FIPS Mode

Question

  • Is FIPS mode enabled?

Answer

  • No. crypto.fips_enabled = 0.

Question

  • What should be read about to better understand FIPS?

Answer

  • TODO

Kernel Command Line

Question

  • What are the active boot parameters from /proc/cmdline?

Answer

  • TODO (values include initrd paths, UUIDs, FIPS status not explicitly shown).

Security Settings & STIGs

V-257957 – TCP Syncookies

Question

  • Is the system using TCP syncookies?

Answer

  • Yes. net.ipv4.tcp_syncookies = 1.

Question

  • How to make this setting persistent?

Answer

  • Add net.ipv4.tcp_syncookies = 1 to a file in /etc/sysctl.d/, then run sysctl --system.
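The runtime value is readable without root via /proc; only persisting it needs privileges (the drop-in file name below is arbitrary):

```shell
# current runtime value (1 = syncookies enabled)
cat /proc/sys/net/ipv4/tcp_syncookies
# as root, to persist across reboots:
# echo 'net.ipv4.tcp_syncookies = 1' > /etc/sysctl.d/90-syncookies.conf
# sysctl --system
```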

V-257958 – ICMP Redirects

Question

  • Is the system accepting ICMP redirect messages?

Answer

  • No. net.ipv4.conf.all.accept_redirects = 0

Question

  • How to harden this?

Answer

  • Add net.ipv4.conf.all.accept_redirects = 0 to /etc/sysctl.d/, then reload settings with sysctl --system.
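All of the per-interface redirect knobs can be read at once with a small loop (readable without root); note that `all` and `default` must both be 0 for full coverage:

```shell
# print every accept_redirects setting and its current value
for f in /proc/sys/net/ipv4/conf/*/accept_redirects; do
  printf '%s = %s\n' "${f##*/conf/}" "$(cat "$f")"
done
```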

Question

  • Did you fully understand all parameter meanings?

Answer

  • No. Some were clarified using ChatGPT.

Discord: https://discord.com/invite/m6VPPD9usw Youtube: https://www.youtube.com/@het_tanis8213 Twitch: https://www.twitch.tv/het_tanis ProLUG PSC Repo: https://github.com/ProfessionalLinuxUsersGroup/psc ProLUG PSC Book: https://professionallinuxusersgroup.github.io/psc/ ProLUG Book of Labs: https://leanpub.com/theprolugbigbookoflabs KillerCoda: https://killercoda.com/het-tanis

ProLUG SEC Intro đź”’

Intro đź‘‹

I’ve just started a new Security Engineering course created by Scott Champine through ProLUG. As a graduate of his Linux Administration course and an active contributor to the Professional Linux User Group, I felt compelled to make time for this new course—I’ve learned a great deal from his teachings in the past.

The Course

This is a deep dive into Enterprise Operational Security. That includes topics like compliance, threat management, and system integrity. I’m also helping coordinate and develop a web-book to accompany the course.1

While I already hold several cybersecurity certifications that cover conceptual frameworks and best practices, this course goes much deeper with hands-on labs. We harden systems with STIGs,2 monitor and detect activity on live systems, and troubleshoot compliance issues.

The course spans 10 weeks, with an estimated 100 hours of work to complete the weekly projects and the capstone.


Discord: https://discord.com/invite/m6VPPD9usw Youtube: https://www.youtube.com/@het_tanis8213 Twitch: https://www.twitch.tv/het_tanis ProLUG PSC Repo: https://github.com/ProfessionalLinuxUsersGroup/psc ProLUG PSC Book: https://professionallinuxusersgroup.github.io/psc/ ProLUG Book of Labs: https://leanpub.com/theprolugbigbookoflabs KillerCoda: https://killercoda.com/het-tanis


Footnotes


  1. ProLUG Security Engineering Course Web-Book Web-Book ProLUG, 2025. ↩︎

  2. Secure Technical Implementation Guidelines DoD Cyber Exchange Website ↩︎