I recently finished the first ProLUG Security Engineering Course, designed and delivered by Scott Champine, also known as Het Tanis, from ProLUG. It ran for about 10 weeks and clocked in at roughly 100 hours of focused effort—but honestly, I probably put in more than that once you count the spontaneous study sessions and the many side discussions that came up. A small group of us showed up consistently and really dug into the material, connecting ideas and bouncing thoughts off each other.
The course itself was free and not tied to any official institution, but it was taught by a seasoned industry professional who also teaches at the post-secondary level. Scott clearly cares about the subject and about helping others understand it. That came through in how he delivered the material, and it brought out a real sense of commitment in us too.
On top of just taking the course, I also helped shape it for future learners by starting a version-controlled course book. We had a small group that met weekly to go over edits and review pull requests. A few people even joined just to learn Git so they could contribute, which added to the sense of shared effort and made the experience even better.
One of the things that helped me stay on track was having a study group. There are a lot of sharp, motivated people in the ProLUG community, and quite a few of them kept up a steady pace through both the course and the book. The regular check-ins and shared discussions made a big difference.
The course itself covered a wide range of topics and gave me a stronger sense of how enterprise security is put together, maintained, and kept resilient. Security isn’t just about ticking boxes—it touches every part of a system. Especially with Linux, where multiple users and external inputs are constantly in play, it doesn’t take much for something to go sideways if you’re not paying attention.
We worked through the process of hardening Linux systems using STIGs—basically long, detailed lists of potential vulnerabilities and how to guard against them. It’s not fast work, but it really forces you to think about each configuration choice.
Patching was another major topic, and not in the usual “just update it” way. We talked about how every change introduces risk, and how important it is to approach patching as part of a controlled, planned process. That includes things like internal repositories, known-good system images, and minimizing surprise behavior from updates.
We also got hands-on with locking down systems: managing ingress and egress, shutting off unnecessary ports, setting up bastion hosts, and building out logging and alerting. We even worked on ways to trap misbehaving users or bots inside chroot jails. One of the others in the group even automated that process with a Bash script for their final project.
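The jail-building step can be sketched in a few lines of Bash: copy a shell plus the shared libraries it links against into a directory tree, then (as root) chroot into it. This is my own minimal sketch, not the course's script or my classmate's final project; the path is a throwaway example.

```shell
#!/usr/bin/env bash
# Minimal chroot jail sketch: a directory tree containing bash and its libs.
# The jail path is an arbitrary example.
set -e
JAIL=/tmp/demo_jail
mkdir -p "$JAIL/bin"

# Copy in a shell and every shared library it links against.
cp /bin/bash "$JAIL/bin/"
for lib in $(ldd /bin/bash | grep -o '/[^ )]*'); do
    mkdir -p "$JAIL$(dirname "$lib")"
    cp "$lib" "$JAIL$lib"
done

# Actually entering the jail requires root, so guard that part.
if [ "$(id -u)" -eq 0 ]; then
    chroot "$JAIL" /bin/bash -c 'echo inside the jail'
fi
```

A real trap would go further (bind the jail to a user's login shell, restrict devices, log activity), but this shows the core mechanic: inside the jail, `/` is the jail directory and nothing else exists.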
We had deep conversations about monitoring too—things like how to design alerts that people can actually respond to, without burning out from constant noise. We looked at log filtering, storage, and what makes a log useful rather than just more clutter.
We also talked about automation and how it can sometimes get away from you. It’s easy for parts of a system to drift out of spec if you’re not careful, especially with orchestration tools. So we looked at how to use infrastructure-as-code and version control to make changes traceable and systems more predictable.
Toward the end of the course, we focused on trust, keys, and certificates. We got practical—generating and managing key pairs, breaking them, fixing them, and eventually building up to TLS certificates. These exercises helped drive home how trust is managed inside systems, especially in setups that lean toward zero trust.
Before this course, I already had a decent background in cybersecurity—some labs, a few certifications—but this gave me something more solid. I now feel like I understand what it means to build security into a system, rather than just bolt it on. I’m more confident setting up and maintaining a hardened Linux environment, and more thoughtful about how to track and manage change over time.
That said, I don’t think I’ve “arrived.” If anything, this course just made me more aware of how much I still have to learn. I’ve moved into that space where I know what I don’t know, and that’s a valuable place to be. It’ll take years to keep digging through it all, but now I’ve got a better starting point—and the confidence to figure things out when new challenges come up.
All in all, this course gave me a deeper appreciation for operational security, and it left me with some solid tools I’ll continue to use. Like with the Admin course before it, I really valued the people I got to work with. I expect we’ll keep exploring these topics together for a long time. And, like always, you make a few good friends along the way.
How many new topics or concepts do you have to go read about now?
Answers
TLS (Transport Layer Security): Prior to the course, I was aware of the terminology and had a 30,000-foot conceptual view. During the course, I was able to zoom in and take a look at the transport and the layers. However, given the sheer scale and complexity of the topic, I will have to read through the 1.1, 1.2, and 1.3 specifications. One of my favorite IT authors, Michael W. Lucas, has a book for sale on the topic: https://www.tiltedwindmillpress.com/product/tls/
ZT (Zero Trust): I get this from a high-level view as well. I learned that Zero Trust is a popular buzzword, or a form of jargon, for most people. Actually drilling down and understanding the many forms and configurations of a ZT network is an immense undertaking. On the side I did some additional reading about it; for example, I read through parts of https://www.cisa.gov/sites/default/files/2023-04/CISA_Zero_Trust_Maturity_Model_Version_2_508c.pdf and plan to dig deeper into the subject.
Tokenization & Data Masking are two interesting topics. If anyone can recommend materials, I am interested. So far I have just found Wikipedia for the explanations.
DMARC (Domain-based Message Authentication, Reporting & Conformance): I do have the book Run Your Own Mail Server by Michael W. Lucas. I assume he covers the topic to some degree.
SPF (Sender Policy Framework): This should be covered by the aforementioned book as well.
TSDBs (Time Series Databases): I have heard the concept in this course and in game development. I think the concept is easy to grasp, but I would like to investigate further.
Question
What was completely new to you?
Answer
STIGs, for sure. Just prior to starting the course, I was given a glimpse of the STIG'ing process. In the course we were tasked with getting STIG Viewer working, downloading specific STIGs, and implementing hardening while answering prompts about what specifically we were doing. It really helped to have finished the Admin course prior to this, as it made the objectives clearer to me.
Bastion hosts. Prior to using the one implemented on Scott's own server, I had not seen this. Drilling into the concept and creating a bastion in the lab was a nice intro.
Question
What is something you heard before, but need to spend more time with?
Answer
I had heard the acronyms for many of the concepts prior to this course. In my answer to Question #1, I detailed what I will need to dig into after the course is complete.
Think about how the course objectives apply to the things you’ve worked on.
Question
How would you answer if I asked you for a quick rundown of how you would
secure a Linux system?
Answer
First, I’d check open ports using ss -ntulp to see what services are listening and close anything unnecessary.
Next, I’d check how many user accounts exist by running cat /etc/passwd | wc -l, and optionally review users with high UIDs to see who has real login access.
I’d confirm that root login over SSH is disabled by checking /etc/ssh/sshd_config and setting PermitRootLogin no.
Then I’d check for any accounts with empty passwords using awk -F: '($2 == "") { print $1 }' /etc/shadow.
I’d list which users have sudo access by checking the sudo group or reviewing /etc/sudoers.
I would review running services with systemctl list-units --type=service and disable anything that isn’t needed.
Then I’d make sure a firewall is enabled and configured, using firewalld, ufw, or iptables, depending on the system.
I’d update all packages using the system’s package manager like dnf, apt, or yum to ensure known vulnerabilities are patched.
I’d also check file permissions on sensitive files like /etc/shadow and /home/* directories.
If SSH is exposed, I’d install and configure fail2ban to protect against brute-force login attempts.
I’d regularly check system logs like /var/log/auth.log or use journalctl to spot anything suspicious.
Lastly, I’d run a tool like Ansible Lock-Down to audit and find common misconfigurations.
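The rundown above can be sketched as one read-only audit pass. This is a hedged sketch, not a complete hardening checklist; the file paths are the common defaults and may differ per distro, and the privileged checks are guarded so the script is safe to run as an unprivileged user.

```shell
#!/usr/bin/env bash
# Rough first-pass audit: listening services, accounts, SSH root login,
# and empty passwords. Read-only; run as root for full results.

echo "== listening sockets =="
ss -ntulp 2>/dev/null | tail -n +2

echo "== local accounts: $(wc -l < /etc/passwd) =="
echo "== login-capable users (UID >= 1000) =="
awk -F: '$3 >= 1000 && $7 !~ /nologin|false/ { print $1 }' /etc/passwd

# sshd_config may be absent; /etc/shadow is unreadable as non-root.
if [ -r /etc/ssh/sshd_config ]; then
    grep -i '^PermitRootLogin' /etc/ssh/sshd_config \
        || echo "PermitRootLogin not set explicitly; check sshd defaults"
fi
if [ -r /etc/shadow ]; then
    awk -F: '($2 == "") { print $1 " has an empty password!" }' /etc/shadow
fi
```

From here, the remaining steps (firewall state, package updates, fail2ban, log review) are better handled by dedicated tooling like the Ansible Lockdown roles mentioned above.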
Question
How would you answer if I asked you why you are a good fit as a security
engineer in my company?
Answer
Though I am not a seasoned Security Engineer, I possess a solid understanding of Linux, system hardening, and monitoring techniques, along with a strong foundation in high-level concepts related to ensuring security, reliability, and confidentiality in systems and networks. I am a diligent learner and a prolific documenter, always striving to deepen my knowledge and contribute meaningfully to operational resilience and security best practices.
Frame
Think about what security concepts you think bear the most weight as you
put these course objectives onto your resume.
In this unit we look at how certificates and keys go beyond asymmetric encryption with public/private key pairs. We look at how multiple checks and multiple layers of trust must be used in this mad, mad world.
How do these topics align with what you already know about system security?
Answer
Well, I felt like I had a clear picture of the symmetric and asymmetric encryption modalities. Furthermore, I had a strong prior understanding of X.509 and SSH, where asymmetric encryption is used, as well as the procedure of generating private and subsequent public keys. However, the verbosity and complexity of the required reading has me scratching my head and looking at more sophisticated modalities of key generation and exchange, e.g. TLS 1.2 and 1.3.
Question
Were any of the terms or concepts new to you?
Answer
Key-transport and key-agreement protocols: methods of establishing a shared secret key between two or more parties. In key transport, one party creates the key and securely delivers it to the others; in key agreement, the parties derive it jointly.
Challenge values: dynamic, randomly generated numbers or strings used to initiate authentication.
Nonce: a unique, random or pseudo-random number used to ensure the security and integrity of data transmitted over a network.
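Challenge values and nonces can be made concrete with a tiny challenge-response sketch using openssl. This is my own illustration, not from the course; the shared secret and the variable names are made up.

```shell
# Both sides share a secret out of band (this value is a placeholder).
SECRET="correct-horse-battery-staple"

# Server side: issue a fresh 16-byte nonce as the challenge value.
NONCE=$(openssl rand -hex 16)

# Client side: prove knowledge of the secret by returning HMAC-SHA256
# over the nonce, without ever sending the secret itself.
RESPONSE=$(printf '%s' "$NONCE" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')

# Server side: recompute and compare. A captured RESPONSE is useless on
# the next attempt, because a new nonce is issued every time.
EXPECTED=$(printf '%s' "$NONCE" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
[ "$RESPONSE" = "$EXPECTED" ] && echo "challenge passed"
```

The freshness of the nonce is what provides the anti-replay property: reusing an old response fails because it was computed over a stale challenge.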
Watch short video about CA and Chain of Trust
Distributed Trust Model
Review the TLS Overview section, pages 4-7 of
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-52r2.pdf
and answer the following questions.
What are the three subprotocols of TLS?
How does TLS apply
Confidentiality
Integrity
Authentication
Anti-replay
Question
What are the three subprotocols of TLS?
Answer
The handshake protocol, used to negotiate the session parameters.
The change cipher spec protocol, used in TLS 1.0, 1.1, and 1.2 to change the cryptographic parameters of a session.
The alert protocol, used to notify the other party of an error condition.
Question
How does TLS apply to:
Confidentiality
Integrity
Authentication
Anti-replay
Answer
Confidentiality
Confidentiality is provided for a communication session by the negotiated encryption algorithm
for the cipher suite and the encryption keys derived from the master secret and random values.
Integrity
TLS uses a cipher suite of algorithms and functions, including key establishment, digital signature, confidentiality, and integrity algorithms. In TLS 1.3, the master secret is derived by iteratively invoking an extract-then-expand function with previously derived secrets; it is used by the negotiated security services to protect the data exchanged between the client and the server. In TLS 1.3, only AEAD symmetric algorithms are used for confidentiality and integrity.
Authentication
Server authentication is performed by the client using the server’s public-key certificate, which
the server presents during the handshake.
Anti-Replay
In TLS 1.3, the integrity-protected envelope of the message contains a monotonically increasing sequence number. Once the message integrity is verified, the sequence number of the current message is compared with the sequence number of the previous message.
We generated a 90-day TLS web client certificate. I saved a snippet of the options below.
Activation/Expiration time.
The certificate will expire in (days): 90
Extensions.
Does the certificate belong to an authority? (y/N): y
Path length constraint (decimal, -1 for no constraint):
Is this a TLS web client certificate? (y/N): y
Will the certificate be used for IPsec IKE operations? (y/N): y
Is this a TLS web server certificate? (y/N): y
Enter a dnsName of the subject of the certificate:
Enter a URI of the subject of the certificate:
Enter the IP address of the subject of the certificate:
Will the certificate be used for signing (DHE ciphersuites)? (Y/n): y
Will the certificate be used for encryption (RSA ciphersuites)? (Y/n): y
Will the certificate be used for data encryption? (y/N): y
Will the certificate be used to sign OCSP requests? (y/N): y
Will the certificate be used to sign code? (y/N): y
Will the certificate be used for time stamping? (y/N): y
Will the certificate be used for email protection? (y/N): y
Will the certificate be used to sign other certificates? (Y/n): y
Will the certificate be used to sign CRLs? (y/N): y
Will the certificate be used for signing (DHE ciphersuites)? (Y/n): y
Enter the URI of the CRL distribution point:
X.509 Certificate Information:
Version: 3
Serial Number (hex): 32a1646105dcb6229eba87ad4c08a99a2bb92a99
Validity:
Not Before: Mon Jun 02 03:46:43 UTC 2025
Not After: Sun Aug 31 03:46:48 UTC 2025
Subject: O=prolug
Subject Public Key Algorithm: RSA
Algorithm Security Level: High (3072 bits)
Modulus (bits 3072):
00:e8:c7:f5:6e:7c:23:e3:7e:e7:d0:c5:c4:cf:c0:98
23:5f:1e:f6:5f:5d:87:c6:c8:18:13:cb:5e:1b:1a:88
03:98:4d:55:5d:4d:14:cc:78:8d:83:e3:c5:65:16:8c
41:a8:9f:32:ab:f4:47:3f:84:b2:b8:0d:7c:b3:a6:e7
21:59:13:d2:45:40:60:d6:2c:eb:5a:f3:00:0c:e7:36
06:0f:ca:51:04:92:06:91:80:f0:04:52:d2:66:e3:33
11:7b:8e:f7:e3:22:19:83:c8:dc:c8:f9:18:c7:51:4f
38:6a:d8:07:bf:12:02:f4:5e:0d:52:2e:cc:0b:4e:d9
e0:b2:07:9a:cd:39:99:a7:28:42:e4:67:b0:ff:04:2d
f9:13:8c:0f:19:b5:13:ee:59:a3:e7:e8:f7:a1:e9:92
2e:ce:49:23:3c:0a:b4:29:ca:5d:74:6e:9e:09:ea:fd
72:6a:89:6e:5f:29:d6:0a:44:98:1e:2c:39:66:44:11
4f:47:c5:64:a3:0c:84:2b:fd:32:2e:a9:ce:e7:be:b4
7c:3b:e6:b9:23:98:82:ac:86:20:07:4e:59:84:4d:0c
02:38:76:87:ef:f8:17:05:5b:93:79:25:73:fc:18:f5
4e:1d:ff:84:45:10:7d:46:51:69:ae:73:6d:e9:1e:fd
ff:55:5a:78:4d:f6:cd:44:af:22:0f:b0:18:fb:82:b9
f6:aa:3d:2a:08:00:62:d1:9b:28:50:94:39:98:f5:de
f9:cf:3f:d8:ae:72:68:69:f1:46:97:8f:d5:a6:9a:3e
4c:57:37:5f:69:0e:2f:4e:b6:6e:65:a5:2c:f0:5b:c6
c2:ff:43:b7:4e:b7:56:3f:2b:d8:5d:b9:73:15:ca:81
f1:c3:78:2f:8d:4f:fd:e8:2d:6f:2f:2d:f6:b9:e1:a0
11:f2:56:18:02:5b:8e:07:da:19:43:c1:70:bc:7b:8b
82:2b:02:e2:71:6e:30:9b:18:8d:ed:1f:29:59:86:9d
81
Exponent (bits 24):
01:00:01
Extensions:
Basic Constraints (critical):
Certificate Authority (CA): TRUE
Key Purpose (not critical):
TLS WWW Client.
TLS WWW Server.
Ipsec IKE.
OCSP signing.
Code signing.
Time stamping.
Email protection.
Key Usage (critical):
Digital signature.
Key encipherment.
Data encipherment.
Certificate signing.
CRL signing.
Subject Key Identifier (not critical):
213b20bf44b3446fb14f6cf72b8c2c03a09e292e
Other Information:
Public Key ID:
sha1:213b20bf44b3446fb14f6cf72b8c2c03a09e292e
sha256:7f76aada143491a8ba0721509a3e49f9e72321ed880f7ee64b8e01172989b3d2
Public Key PIN:
pin-sha256:f3aq2hQ0kai6ByFQmj5J+ecjIe2ID37mS44BFymJs9I=
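The lab walked through GnuTLS certtool's interactive prompts, shown above. For comparison, here is a non-interactive openssl equivalent that produces a similar 90-day self-signed certificate; this substitution, along with the file names and subject, is my own and not part of the lab.

```shell
# Generate a 3072-bit RSA key and a 90-day self-signed certificate in one
# step (no passphrase, subject O=prolug to match the lab output above).
openssl req -x509 -newkey rsa:3072 -nodes \
    -keyout ca-key.pem -out ca-cert.pem \
    -days 90 -subj "/O=prolug"

# Inspect the result, much like the certtool dump above.
openssl x509 -in ca-cert.pem -noout -subject -dates
```

certtool's prompt-driven flow and openssl's flag-driven flow produce the same kind of artifact; the difference is mainly ergonomics and which extensions get set by default.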
Does the diagram on page 44 make sense to you for what you did with a certificate authority in this lab?
Answer
Yes, it does. We had only set up a portion of this chain of trust, yet it got the idea across of whom we are referring to and how we build a certificate from that referral.
Monitoring systems and alerting when issues arise are critical responsibilities for system operators. Effective observability ensures that system health, performance, and security can be continuously assessed.
How does the usage guidance of that blog align with your understanding of these three items?
Answer
Though the concepts involved in telemetry are really quite simple, they took me some time to internalize and fully understand.
I can’t say it paralleled my own understanding, as my understanding was very limited. Prior to the lectures, if I heard the word telemetry, I would think of non-GPS tracking techniques or some sort of secret tracking by Palantir.
My simplified outline of these three things:
A metric represents a point-in-time measurement of a particular source.
Logs are discrete, event-triggered occurrences.
Traces follow a program’s flow and data progression.
Question
What other useful blogs or AI write-ups were you able to find?
When we think of our systems, sometimes an airgapped system is simple to think about because everything is closed in. The idea of alerting or reporting is the opposite. We are trying to get the correct, timely, and important information out of the system when and where it is needed.
What is the litmus test for a page? (Sending something out of the system?)
Answer
The page must pertain to an imminent, actionable situation that must be addressed quickly.
Question
What is over-monitoring v. under-monitoring? Do you agree with the assessment of the paper? Why or why not, in your experience?
Answer
Over-monitoring can be compared to hyper-vigilance. Over time it works against you as fatigue or indifference sets in. It also involves transporting, receiving, and disseminating too much information, causing cognitive overload and leading to poor decision making. Furthermore, the additional information being broadcast leaves a system more susceptible from a security standpoint.
Under-monitoring would be a lack of the contextual reporting, responsiveness, and diligence needed to keep a system from going down.
From reading this article, it seems to me that one must turn monitoring into a spectrum of detail. Hypercritical indicators like uptime, load, and capacity should be reported daily against a pre-determined baseline. Estimations can be made from this, allowing for prediction, while major changes (those outside the predicted norm) could trigger alerts. Paging should be reserved for the utmost critical issues.
Question
What is cause-based v. symptom-based and where do they belong? Do you agree?
Answer
Cause-based is the analysis/investigation of the root cause of a certain outcome. In the context of systems operation and security, it is finding the vulnerability, the infiltration/exfiltration point, or the cause of failure, such as hitting memory/CPU/disk space limitations.
Symptom-based analysis is observing the effects of an unknown origin, i.e. systems going down, data loss, etc.
Bringing the system back up, or restoring data from backups, does nothing to address the root cause; it only remediates the effects.
Configuration drift is the silent enemy of consistent, secure infrastructure.
When systems slowly deviate from their intended state, whether that be through manual changes, failed updates, or misconfigured automation, security risks increase and reliability suffers.
What overlap of terms and concepts do you see from this week’s meeting?
Answer
Lifecycle management and Change Control (Change Management).
Change Management is a system for ensuring process and product integrity.
Despite these controls, variation from the norm (configuration drift) is inevitable.
So we must put controls in place in order to catch variation/drift.
In the case of systems, it is both misconfigured systems and misconfigured users that induce variation/drift.
Question
What are some of the standards and guidelines organizations involved with configuration management?
Answer
Originally developed by the U.S. Department of Defense to ensure quality, reliability, and integrity in the manufacturing supply chain, configuration management principles were later adopted and expanded upon by standards bodies such as ANSI, ISO, and IEEE. These concepts have since evolved through industry-specific frameworks, including:
ITIL
ISO/IEC
NIST
IEEE
CERN
Question
Do you recognize them from other IT activities?
Answer
For sure.
Baselining: gathering telemetry from a system at its base configuration.
Standards: developing a standard for configuration or procedure to ensure consistent and predictable output.
Monitoring and parsing logs is essential to operational intelligence. Computers typically produce immense amounts of data—far more than a human can interpret in real time. To extract meaning from this data, we must intelligently filter event logs into clear, comprehensible, and actionable items.
Achieving this is easier said than done. This unit offers general advice on the art of making complex information comprehensible.
There are 14 references at the end of the chapter. Follow them for more information. One of them, by Julia Evans, should be reviewed for question “c”.
Question
What are some concepts that are new to you?
Answer
Core dumps, Memory dumps, or Stack traces.
I have heard the terms before and understand the concepts to a basic degree. I decided to do a bit of further reading to understand each of the dumps and traces so here is a gist.
A core dump is a snapshot of a process’s state at the time it went down.
A memory dump is a snapshot of the Random Access Memory (RAM) at the time of the failure.
A stack trace is the record of function calls through the stack from end (error) to beginning (the original call). The way I personally conceptualize this is through comparison to root cause analysis, something I am familiar with.
Host intrusion detection systems (HIDS) or Host Agents
A few ideas from the book:
“Modern (sometimes referred to as “next-gen”) host agents use innovative techniques aimed at detecting increasingly sophisticated threats. Some agents blend system and user behavior modeling, machine learning, and threat intelligence to identify previously unknown attacks.”
“Host agents always impact performance, and are often a source of friction between end users and IT teams. Generally speaking, the more data an agent can gather, the greater its performance impact may be because of deeper platform integration and more on-host processing.”
Question
There are 5 conclusions drawn, do you agree with them? Would you add or remove anything from the list?
Answer
To begin with, here are the conclusions drawn:
“Debugging is an essential activity whereby systematic techniques—not guesswork—achieve results.”
“Security investigations are different from debugging. They involve different people, tactics, and risks.”
“Centralized logging is useful for debugging purposes, critical for investigations, and often useful for business analysis.”
“Iterate by looking at some recent investigations and asking yourself what information would have helped you debug an issue or investigate a concern.”
“Design for safety. You need logs. Debuggers need access to systems and stored data. However, as the amount of data you store increases, both logs and debugging endpoints can become targets for adversaries.”
Firstly, I would like to preface this answer with a disclaimer: I lack the competency to criticize and/or dissect O’Reilly’s book. With that out of the way, I am going to target the first point.
My only criticism here is that the point is very broad in scope compared to the more granular, topic-specific points in this book/chapter.
Question
In Julia Evans’s debugging blog, which shows that debugging is just another form of troubleshooting, what useful things do you learn about the relationship between these topics?
Answer
Both debugging and troubleshooting involve:
Proceduralization: If a clear procedure doesn’t exist, begin documenting and formalizing the process into a repeatable method.
Humility: Acknowledge that you might be the cause of the problem. This is especially important in development.
Methodical Experimentation: Form a hypothesis, then devise a controlled method to test it—use unit tests in development, or targeted scripts and commands when debugging.
One Step at a Time: Tackle problems incrementally—“eat the elephant one bite at a time.”
Strong Foundations: Write debuggable code and build robust systems. A good foundation makes issues easier to isolate.
More Is Better: Verbose error messages provide more clues—enable detailed output when possible.
Question
Are there any techniques you already do that this helps solidify for you?
Answer
Yes, I try to create excellent documentation with respect for my future self or others I may need to share it with. This involves numbered procedural steps with inputs and outputs, if that is the nature of the work. Otherwise, I write in a general manner that is legible to others.
What interesting or new things do you learn in this reading? What may you want to know more about?
Answer
Interesting Concept:
One of the general themes I gathered from this article is low cognitive overhead. It’s a concept I’m very familiar with from accessibility-focused design. Too much information overwhelms our ability to observe, absorb, and decide effectively.
For example, public signage must be simple, legible, and self-descriptive through clear graphic composition—guiding the eye where to look first and in which direction to proceed. This closely parallels the need for simplicity in monitoring and alerting systems. When such systems become overly complex, they can lead to misinterpretation, miscommunication, and fatigue due to information overload.
Information must be derived and presented in a way that is easily consumable, where errors are unmistakable—without exhausting the viewer.
New concepts
White box monitoring systems vs. Black box monitoring systems.
Conducting ad hoc retrospective analysis (i.e. debugging)
(4 Golden signals) Latency, Traffic, Saturation, Errors
This one in particular relates strongly to the USE acronym I recently picked up from Het: Utilization, Saturation, Errors.
Question
What are the “4 golden signals”?
Answer
Latency
Traffic
Saturation
Errors
Question
After reading these, why is immutability so important to logging?
Answer
Tamper Resistance: Immutable logs cannot be altered or deleted without detection, which helps prevent covering up malicious activity or mistakes.
Auditability: Logs serve as historical records. If they can be changed, audits and investigations lose their value.
Debugging Integrity: Developers and operators rely on logs to trace errors. Mutable logs can introduce false positives or hide root causes.
Regulatory Compliance: Standards like HIPAA, PCI-DSS, and GDPR often require tamper-evident or immutable log storage.
Forensic Value: In incident response, immutable logs serve as trustworthy evidence for timelines and breach analysis.
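One low-level way Linux can enforce tamper resistance is the filesystem append-only attribute: entries can be added but not rewritten or truncated. Here is a hedged demo; setting the attribute requires root and ext*-style attribute support, so that part is guarded, and the log path is a throwaway example.

```shell
# Demonstrate append-only logging with chattr. Without root, only the
# normal write happens; with root, rewrites are refused while appends work.
LOG=/tmp/demo_append.log
echo "first entry" > "$LOG"

if [ "$(id -u)" -eq 0 ] && chattr +a "$LOG" 2>/dev/null; then
    echo "second entry" >> "$LOG"                   # appends still succeed
    if ! echo "rewrite" > "$LOG" 2>/dev/null; then  # truncation is refused
        echo "truncation blocked"
    fi
    chattr -a "$LOG"                                # remove attribute to clean up
fi
```

In practice, shipping logs to a separate, write-once collector is stronger than local attributes, since an attacker with root on the host can remove `+a`; the attribute mainly raises the bar and leaves evidence.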
Question
What do you think the other required items are for logging to be effective?
Answer
In order to be effective, logs must be:
Trustworthy: Logs should be immutable.
Time-stamped: Every entry needs a synced timestamp.
Clear levels: Use INFO, ERROR, DEBUG, etc., to show importance.
Structured: Format logs so machines and humans can read them.
Context-rich: Include request IDs, user info, IPs—anything that helps trace the story.
Centralized: Gather logs in one place for easy searching and alerting.
Searchable: You should be able to find issues fast with good queries.
Safe: Control who can see logs—some contain sensitive info.
Durable: Logs shouldn’t disappear in a crash—use backups and redundancy.
Noise-controlled: Avoid flooding—rotate logs and cap log rates.
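Several of the items above (timestamps, levels, context) can be combined in even the simplest tooling. Here is a toy shell helper of my own, with made-up field names, showing one structured, timestamped, leveled log line:

```shell
# Tiny logging helper: UTC timestamp, severity level, and a context field
# (a request ID here) on every line, so entries are machine-parseable.
log() {
    level=$1; shift
    printf '%s [%s] request_id=%s %s\n' \
        "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$level" "${REQ_ID:-none}" "$*"
}

REQ_ID=abc123
log INFO  "user login succeeded user=alice"
log ERROR "disk write failed device=/dev/sda1"
```

Key=value pairs like `request_id=` are what make centralized search and alerting practical later: a query can pull every line for one request across services.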
Repositories and patching are the general theme of this unit. We dive into creating internally audited repositories for safe enterprise operation. This configuration allows for greater security scrutiny and compatibility testing before scheduled patching takes place. For example, a company might skip every other version of a package in order to reduce update cadence, giving more time for assessment, correction, and troubleshooting of internal software. Much like any enterprise decision regarding cost and effort, an analysis must take place.
Review the Rocky documentation on software management in Linux.
Question
What do you already understand about the process?
Answer
I gained a decent understanding of the package management systems of both RHEL- and Debian-based distros through studying for the LPIC-1 and completing the ELAC course through this group. From this I learned about versioning and dependency management, including modules, and about the differences between RPM, YUM, and DNF and how these evolutions of package management came into being.
Question
What new things did you learn or pick up?
Answer
I did not understand the depth to which RPM packages tracked package data.
I did not know that headers could be edited in the package in order to add custom labeling.
I had a basic awareness and high-level understanding of internal package management. Prior to our lecture, I had not seen an internal package server/relay set up and configured.
I did not know much about EPEL beyond having to enable it in the CLI for additional packages outside of the default DNF repositories.
Question
What are the DNF plugins? What is the use of the versionlock plugin?
Answer
DNF plugins are external modules that extend the functionality of DNF. I have some experience activating COPR repositories using DNF plugins. versionlock is a specific plugin that allows an admin/engineer/dev to lock a particular package to a specified version so that it is not mistakenly changed or overwritten. In my experience this pattern is typical in software development, where many dependencies might be needed. Breaking updates were commonplace, so most modern software projects contain a lock file that indicates the specific dependency versions that must be used in order to build or interpret the project.
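In practice, versionlock usage looks roughly like the sketch below. It needs dnf and root, so the commands are guarded; the package name `httpd` is just an example.

```shell
#!/usr/bin/env bash
# Sketch of pinning a package with the DNF versionlock plugin on a
# RHEL-family system. Guarded so it is a no-op without dnf and root.
RAN=no
if command -v dnf >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    dnf -y install python3-dnf-plugin-versionlock
    dnf versionlock add httpd     # pin httpd at its installed version
    dnf versionlock list          # show the active locks
    dnf versionlock delete httpd  # release the pin when ready to update
    RAN=yes
fi
echo "versionlock demo ran: $RAN"
```

Once locked, `dnf update` skips the pinned package entirely, which is exactly the controlled-cadence behavior this unit is about.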
Question
What is an EPEL? Why do you need to consider this when using one?
Answer
EPEL stands for Extra Packages for Enterprise Linux. These packages exist outside of the core enterprise offering and can therefore cause issues. Unlike core packages, these extra packages could introduce incompatibilities, resulting in rejection by endorsed support specialists.
After looking at that, how does patching a fleet of systems in the enterprise differ from pushing “update now” on your local desktop?
Answer
Patching a fleet of systems involves the systematic updating of installed software to fix security vulnerabilities, improve stability, and introduce minor enhancements. The process is governed by organizational policies to ensure uptime and compliance.
Because changes affect many systems simultaneously, patching acts as an amplifier of problems if not handled carefully. Therefore, enterprise patching must be strategic, managed, and auditable.
In contrast, running updates on a personal system is typically an automated, low-risk operation, with little concern for version conflicts or trust in the source. Additionally, modern filesystems like ZFS and Btrfs provide the ability to quickly roll back changes if something fails.
Question
What seems to be the major considerations? What seems to be the major roadblocks?
What file is fixed for all of them to be remediated?
# Install httpd on your Rocky server
systemctl stop wwclient
dnf install -y httpd
systemctl start httpd
Check STIG V-214234
Question
What is the problem?
Answer
Event logging can fail.
Question
What is the fix?
Answer
This can be fixed by implementing failure alerts.
Question
What type of control is being implemented?
Answer
This type of control would be a Detective Control
Question
Is it set properly on your system?
Answer
No, this is not set up by default; it must be implemented after installation.
Check STIG V-214248
Question
What is the problem?
Answer
By default, sensitive information, including security controls, may be available to all users because privileged user access controls have not been implemented.
Question
What is the fix?
Answer
Develop roles for privileged users and define access policies.
Question
What type of control is being implemented?
Answer
This is a preventative type control.
Question
Is it set properly on your system?
Answer
No, not by default. The superuser has special privileges, of course, but beyond that there are no other tiers of access.
Question
How do you think SELinux will help implement this control in an enforcing state? Or will it not affect it?
Answer
SELinux allows for strong group creation and control, so it would help batch users and apply granular control mechanisms.
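One concrete way SELinux supports this is by mapping Linux logins to confined SELinux users (a sketch; `alice` is a placeholder account):

```shell
# Confirm SELinux is enforcing
getenforce

# Show current login-to-SELinux-user mappings
semanage login -l

# Confine the account "alice" to the unprivileged user_u SELinux user
semanage login -a -s user_u alice
```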
# Start out by removing all your active repos
cd /etc/yum.repos.d
mkdir old_archive
mv *.repo old_archive
dnf repolist
# Mount the local repository and make a local repo
mount -o loop /lab_work/repos_and_patching/Rocky-9.5-x86_64-dvd.iso /mnt
df -h # should see the mount point
ls -l /mnt
touch /etc/yum.repos.d/rocky9.repo
vi /etc/yum.repos.d/rocky9.repo
[BaseOS]
name=BaseOS Packages Rocky Linux 9
metadata_expire=-1
gpgcheck=1
enabled=1
baseurl=file:///mnt/BaseOS/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[AppStream]
name=AppStream Packages Rocky Linux 9
metadata_expire=-1
gpgcheck=1
enabled=1
baseurl=file:///mnt/AppStream/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
#Save with esc :wq or “shift + ZZ”
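As an alternative sketch to hand-editing in vi, the same stanza can be generated non-interactively with a heredoc (written to a scratch path here so it is safe to try; point REPO_FILE at /etc/yum.repos.d/rocky9.repo on the lab machine):

```shell
# Generate the BaseOS stanza of rocky9.repo without an editor.
REPO_FILE="${REPO_FILE:-/tmp/rocky9.repo}"
cat > "$REPO_FILE" <<'EOF'
[BaseOS]
name=BaseOS Packages Rocky Linux 9
metadata_expire=-1
gpgcheck=1
enabled=1
baseurl=file:///mnt/BaseOS/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
EOF

# Same permissions as the chmod step in this lab
chmod 644 "$REPO_FILE"
```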
Question
Do the paths you’re using here make sense to you based off what you saw
with the ls -l? Why or why not?
Answer
TODO
chmod 644 /etc/yum.repos.d/rocky9.repo
dnf clean all
# Test the local repository
dnf repolist
dnf --disablerepo="*" --enablerepo="AppStream" list available
Approximately how many are available?
dnf --disablerepo="*" --enablerepo="AppStream" list available | nl
dnf --disablerepo="*" --enablerepo="AppStream" list available | nl | head
dnf --disablerepo="*" --enablerepo="BaseOS" list available
Approximately how many are available?
dnf --disablerepo="*" --enablerepo="BaseOS" list available | nl
dnf --disablerepo="*" --enablerepo="BaseOS" list available | nl | head
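Rather than eyeballing the `nl` output, `wc -l` answers the "approximately how many" questions directly (a sketch against the same repos; `tail -n +2` drops the "Available Packages" header line from the output):

```shell
# Count available packages per repo
dnf --disablerepo="*" --enablerepo="BaseOS" list available | tail -n +2 | wc -l
dnf --disablerepo="*" --enablerepo="AppStream" list available | tail -n +2 | wc -l
```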
# Try to install something
dnf --disablerepo="*" --enablerepo="BaseOS AppStream" install gimp
hit “n”
Question
How many packages does it want to install?
Answer
TODO
Question
How can you tell they’re from different repos?
Answer
TODO
# Share out the local repository for your internal systems (tested on just this one system)
rpm -qa | grep -i httpd
systemctl status httpd
ss -ntulp | grep 80
lsof -i :80
cd /etc/httpd/conf.d
vi repos.conf
<Directory "/mnt">
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
Alias /repo /mnt
<Location /repo>
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Location>
systemctl restart httpd
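Before repointing clients, it may be worth verifying the share over HTTP from the server itself (hammer25 is the lab hostname used in the baseurl below; substitute your own):

```shell
# Expect a 200 and an index listing if the Alias and <Location> are correct
curl -s -o /dev/null -w '%{http_code}\n' http://hammer25/repo/
curl -s http://hammer25/repo/BaseOS/ | head
```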
vi /etc/yum.repos.d/rocky9.repo
###USE YOUR HAMMER MACHINE IN BASEURL###
[BaseOS]
name=BaseOS Packages Rocky Linux 9
metadata_expire=-1
gpgcheck=1
enabled=1
#baseurl=file:///mnt/BaseOS/
baseurl=http://hammer25/repo/BaseOS/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[AppStream]
name=AppStream Packages Rocky Linux 9
metadata_expire=-1
gpgcheck=1
enabled=1
#baseurl=file:///mnt/AppStream/
baseurl=http://hammer25/repo/AppStream/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Question
Do the paths you’ve modified at baseurl make sense to you? If not, what do you need to better understand?
dnf clean all
dnf repolist
# Try to install something
dnf --disablerepo="*" --enablerepo="BaseOS AppStream" install gimp
You’ve set up a local repository and you’ve shared that repo out to other systems that might want to
connect. Why might you need this if you’re going to fully air-gap systems? Is it still necessary even if
your enterprise patching solution is well designed? Why or why not?
Answer
We need a unified checkpoint that verifies a package's integrity and conformity before it is used to patch air-gapped systems. Air-gapped systems are not eternally disconnected; they can be connected to other systems in a highly controlled manner.
Question
Can you add the Mellanox ISO that is included in the /lab_work/repos_and_patching section to be a
repository that your systems can access? If you have trouble, troubleshoot and ask the group for
help.
Answer
Yes, you can. The ISO must be given the proper repository metadata and registered, i.e. packaged into the local repo, so that other package management systems can see it.
Question
What seems to be the theme of air-gapped systems?
Answer
Air-gapped systems are highly controlled and isolated systems. The degree of isolation correlates directly with operational burden, since modern production systems are typically highly connected to LANs and/or WANs.
Blocking/Limiting/Bottlenecking Network Traffic
Limiting Services to Bare Essentials
Mitigating Data Egress
Quarantining unexpected behavior
Logging usage events
Question
What seems to be their purpose?
Answer
To limit the attack surface and mitigate malicious access and/or data infiltration/exfiltration.
Question
If you use google, or an AI, what are some of the common themes that come up when asked about air-gapped or bastion systems?
Answer
Common Themes in Air-Gapped Systems
Data Transfer Procedures
Patch Management & Updates
Logging and Auditing
Threat Models
Authentication & Access
Compliance & Certification
Operational Burden
Common Themes in Bastion Hosts
Network Segmentation
Hardened OS Configuration
Jump Host Architecture
Access Control & MFA
Monitoring and Alerting
Change Management
Shared Themes
Both require strict access control
Emphasis on tamper resistance and detection
Tradeoffs between security vs. usability
Often part of zero-trust or defense-in-depth architectures
Jailed SSH users: Using chroot in sshd_config to restrict access.
systemd-nspawn: Lightweight containers for sandboxed environments.
Flatpak / Snap: Sandboxed app delivery systems for desktop applications.
Related Tools & Commands
chroot, unshare, setfacl, auditd
firejail, bwrap, systemd-nspawn
docker, podman, lxc-start
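As a sketch of the first method in the list above, a chroot jail for an SFTP-only group might look like this in sshd_config (the group name "jailed" and the /srv/jail path are assumptions for illustration; the chroot target must be root-owned and not group- or world-writable):

```
# Members of "jailed" are confined to /srv/jail/<user> and limited to SFTP
Match Group jailed
    ChrootDirectory /srv/jail/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```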
Question
Can you enumerate the methods of jailing users?
Answer
Yes, there are five avenues that I know of: chroot jails (including ChrootDirectory in sshd_config), namespace isolation with unshare, systemd-nspawn containers, sandboxing tools like firejail and bwrap, and full container runtimes such as docker, podman, or LXC.
Question
Can you think of when you’ve been jailed as a Linux user? If not, can you think of the useful ways to use a jail?
Answer
No, I have not experienced being jailed as a user. However, I can think of some use cases: one would be a honeypot for observability; another would be trapping crawlers and bots.
PAM is a modular and flexible framework for integrating authentication methods into applications. By separating and abstracting authentication mechanisms from application code, PAM allows admins to manage authentication policies centrally. PAM also allows for customized authentication processes (security through obscurity).
Read about active directory (or LDAP) configurations of Linux via sssd3 👍
Question
Why do we not want to just use local authentication in Linux? Or really any system?
Answer
Local authentication presents several problems. Firstly, there is no federated access, so there is fragmentation of systems. Secondly, scalability is an issue as each local system manages that local system’s users, requiring individual account provisioning and password management. Thirdly, it complicates auditing and compliance, since there is no centralized logging or consistent policy enforcement. Additionally, stale or orphaned accounts can accumulate unnoticed, increasing security risks. Finally, it prevents the implementation of modern security practices such as single sign-on (SSO), multi-factor authentication (MFA), and role-based access control across a distributed environment.
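Centralized authentication is usually wired up with realmd and sssd. As a hedged sketch (example.com and the admin account are placeholders), joining a host to a domain looks like:

```shell
# Install the domain-join tooling (RHEL/Rocky package names)
dnf install -y realmd sssd oddjob oddjob-mkhomedir krb5-workstation

# Discover and join the domain (prompts for the admin password)
realm discover example.com
realm join --user=admin example.com

# Confirm the join and that domain users resolve
realm list
id admin@example.com
```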
Validate Certificate Chains – Ensure that certificates used for PKI-based authentication are properly validated by building a complete certification path to a trusted root.
Associate Certificates with User Accounts – Confirm that every authentication certificate is explicitly mapped to a valid user account to maintain identity integrity.
Restrict Credential Caching Duration – Limit the validity period of cached authentication credentials to a maximum of 24 hours to reduce risk in the event of compromise.
PAM Pluggable Authentication Modules provide a flexible mechanism for authenticating users on Unix-like systems.
AD Active Directory is Microsoft’s centralized directory service for authentication, authorization, and resource management.
LDAP Lightweight Directory Access Protocol is an open, vendor-neutral protocol for accessing and maintaining distributed directory information services.
sssd System Security Services Daemon provides access to remote identity and authentication providers like LDAP or Kerberos.
oddjob A D-Bus service used to perform privileged tasks on behalf of unprivileged users, often for domain enrollment or home directory creation.
krb5 Kerberos 5 is a network authentication protocol that uses tickets for securely proving identity over untrusted networks.
realm/realmd A tool that simplifies joining and managing a system in a domain like Active Directory or IPA using standard services.
wheel (system group in RHEL): A special administrative group whose members are allowed to execute privileged commands using sudo.
You will likely not build an LDAP server in a real world environment. We are doing it for understanding and ability to complete the lab. In a normal corporate environment this is likely Active Directory.
To simplify some of the typing in this lab, there is a file located at /lab_work/identity_and_access_management.tar.gz that you can pull down to your system with the correct .ldif files.
[root@hammer1 ~]# cp /lab_work/identity_and_access_management.tar.gz .
[root@hammer1 ~]# tar -xzvf identity_and_access_management.tar.gz
dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=Manager,dc=prolug,dc=lan" read by * none

dn: olcDatabase={2}mdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=prolug,dc=lan

dn: olcDatabase={2}mdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=Manager,dc=prolug,dc=lan

dn: olcDatabase={2}mdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}s4x6uAxcAPZN/4e3pGnU7UEIiADY0/Ob

dn: olcDatabase={2}mdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by dn="cn=Manager,dc=prolug,dc=lan" write by anonymous auth by self write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=Manager,dc=prolug,dc=lan" write by * read
#
# LDAP Defaults
#

# See ldap.conf(5) for details
# This file should be world readable but not world writable.

#BASE   dc=example,dc=com
#URI    ldap://ldap.example.com ldap://ldap-master.example.com:666

#SIZELIMIT      12
#TIMELIMIT      15
#DEREF          never

# When no CA certificates are specified the Shared System Certificates
# are in use. In order to have these available along with the ones specified
# by TLS_CACERTDIR one has to include them explicitly:
TLS_CACERT /etc/pki/tls/ldapserver.crt
TLS_REQCERT never
# System-wide Crypto Policies provide up to date cipher suite which should
# be used unless one needs a finer grained selection of ciphers. Hence, the
# PROFILE=SYSTEM value represents the default behavior which is in place
# when no explicit setting is used. (see openssl-ciphers(1) for more info)
#TLS_CIPHER_SUITE PROFILE=SYSTEM

# Turning this off breaks GSSAPI used with krb5 when rdns = false
SASL_NOCANON on
SSSD can connect a server to a trusted LDAP system and authenticate users for access to
local resources. You will likely do this during your career and it is a valuable skill to work with.
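A minimal sssd.conf sketch for the LDAP case, assuming the dc=prolug,dc=lan suffix and the TLS certificate path used earlier in this lab (the server hostname is a placeholder):

```
# /etc/sssd/sssd.conf must be owned by root with mode 0600
[sssd]
domains = prolug.lan
services = nss, pam

[domain/prolug.lan]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldapserver.prolug.lan
ldap_search_base = dc=prolug,dc=lan
ldap_tls_cacert = /etc/pki/tls/ldapserver.crt
cache_credentials = true
```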
This week covers more implementation of Security Technical Implementation Guides (STIGs), and we look at LDAP (Lightweight Directory Access Protocol) installation and setup.
This unit also introduces foundational knowledge on analyzing, configuring, and hardening networking components using tools and frameworks like STIGs, OpenSCAP, and DNS configurations.
There are 401 STIGs for RHEL 9. If you filter in your STIG viewer for sysctl there are 33 (mostly network focused); for ssh, 39; and for network, 58. There is some overlap between those, but review them and answer these questions.
Question 1. As systems engineers why are we focused on protecting the network portion of our
server builds?
Answer
Most attacks come through the network.
Misconfigured services can expose critical ports.
Data in transit is vulnerable without proper encryption and access control.
External exposure often increases the attack surface for things like brute-force attempts, malware injection, or unauthorized access.
Question 2. Why is it important to understand all the possible ingress points to our servers that
exist?
Answer
Ingress points = potential paths of attack. Unexpected ingress can be exploited.
Zero-trust environments rely on strict control and observability of ingress.
Compliance and auditing require accurate records of what’s accessible.
Question 3. Why is it so important to understand the behaviors of processes that are
connecting on those ingress points?
Answer
Security posture depends on visibility
Attackers scan for overlooked vulnerabilities
Automation tools (e.g., Ansible, Terraform) can introduce new ingress points unknowingly during updates.
Incident response is much faster and more effective when engineers understand the network surface.
Question
What is the significance of the nsswitch.conf file?
Answer
The /etc/nsswitch.conf file controls the order in which name resolution methods are used.
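For example, typical entries look like this (sources are consulted left to right until one answers):

```
# /etc/nsswitch.conf excerpt: "files" (/etc/hosts, /etc/passwd) is
# consulted before DNS or sssd
hosts:      files dns myhostname
passwd:     files sss
group:      files sss
```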
Question
What are security problems associated with DNS and common exploits? (May have
to look into some more blogs or posts for this)
Answer
Core issues with DNS:
Traditional DNS can be spoofed due to a lack of built-in verification.
Queries and responses are sent in plaintext, making confidentiality an issue.
No way to validate the source of the DNS data.
Centralized, single point of failure.
Common Exploits:
Spoofing (false record injection)
Flooding (overwhelming the resolver)
Tunneling (query-based exfiltration)
Hijacking (modifying domain registration data)
Typosquatting (registering similar domains) (a new phrase for me)
Question
What are martians and is this system allowing them?
Answer
Martians are packets with invalid or bogus source/destination addresses. This system is not logging them (log_martians = 0); whether they are allowed depends on other rules, but logging is disabled.
kernel.panic = 0 means the system won’t auto-reboot on panic. panic_on_oops = 1 indicates it will panic on kernel oops errors. Other panic triggers are mostly disabled.
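The states described above map to a sysctl drop-in such as the following (a sketch; the file name is arbitrary, and log_martians is flipped to 1 to enable the logging noted as disabled):

```
# /etc/sysctl.d/90-hardening.conf (apply with: sysctl --system)
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
kernel.panic = 0          # do not auto-reboot after a panic
kernel.panic_on_oops = 1  # treat a kernel oops as a panic
```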
I’ve just started a new Security Engineering course created by Scott Champine through ProLUG. As a graduate of his Linux Administration course and an active contributor to the Professional Linux User Group, I felt compelled to make time for this new course—I’ve learned a great deal from his teachings in the past.
This is a deep dive into Enterprise Operational Security. That includes topics like compliance, threat management, and system integrity. I’m also helping coordinate and develop a web-book to accompany the course.1
While I already hold several cybersecurity certifications that cover conceptual frameworks and best practices, this course goes much deeper with hands-on labs. We harden systems with STIGs,2 monitor and detect activity on live systems, and troubleshoot compliance issues.
The course spans 10 weeks, with an estimated 100 hours of work to complete the weekly projects and the capstone.