Wednesday, January 25, 2023

How Wifi Works

I've been following the MacAdmins Conference on YouTube for a few years and many of their panels are just excellent. Even if you don't use any products from Apple, many of their topics are things that any systems administrator might need to learn, like networking, documentation, and project management.

I have recommended one of their videos many times. It does a great job of explaining the physics underneath wifi, and I think that really helps people appreciate just how complex it all is. It is also very helpful for building an understanding of what issues you may come across and how to correct them.

Sunday, October 9, 2022

FileWave and Let's Encrypt

Let's Encrypt offers free SSL certificates. These are usually used for websites, but they can be used for other things. Here I demonstrate how I made them function with FileWave. This removes the need to (a) manually install new certificates every year and (b) pay for those certificates.

For the unfamiliar, FileWave is a tool for managing your endpoint computers. It can send files, run scripts, install programs, update the OS, and handle other "overhead" tasks for Windows and MacOS. It can also act as an MDM for any of Apple's platforms (MacOS, iOS, iPadOS, etc.) as well as Android. In order to function properly, it needs secure connections between the endpoint devices and the server which coordinates these actions. Usually you would buy a certificate to achieve this and have to replace it every year.

However, Let's Encrypt intentionally designed their system so you could automate the renewals, and they don't charge for their certificates. This makes it the perfect tool for eliminating this manual work and reducing your upkeep costs. I run FileWave on CentOS and use Certbot to automate renewals with Let's Encrypt, so I'll show how I used those tools. If you're running a FileWave server on a Mac, these general ideas should be easily adaptable; the Certbot website gives directions on how to install it on Macs using Homebrew.

First: Install Certbot

Go to the command line on your FileWave server and install certbot. You can find directions on Certbot's website. Specifically, I followed the directions for CentOS 7 and "other" applications.
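For reference, on CentOS 7 the install came down to enabling EPEL and pulling the package from it. Treat the exact package names below as an assumption, since Certbot's recommended install method has changed over the years; the Certbot website is the authority:

```shell
# Enable the EPEL repository, then install Certbot from it.
# (CentOS 7 package names; check Certbot's site for current instructions.)
sudo yum install -y epel-release
sudo yum install -y certbot
```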

Make sure that any firewall or packet filtering settings on your server are going to allow Certbot to work. For CentOS 7, I used these commands:


sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload

Second: Get A Certificate

At this point, you should be able to get a certificate for the server. Remember that it must have a public IP and a publicly resolvable hostname. Otherwise, Let's Encrypt can't issue it a certificate. To get the certificate, run this command and answer the questions.


sudo certbot certonly --standalone

Assuming your hostname is filewave.example.com, you'll have certificates in /etc/letsencrypt/live/filewave.example.com. This is fine for some programs, but the FileWave server needs to be "tricked" into using them. That takes a few steps. First, move the original self-signed certificate out of the way. Second, replace it with the certificate that Let's Encrypt signed for you. You can do that with these commands:


sudo -s
cd /usr/local/filewave/certs
mv server.key server.key_bak
mv server.crt server.crt_bak
cp /etc/letsencrypt/live/filewave.example.com/fullchain.pem server.crt
cp /etc/letsencrypt/live/filewave.example.com/privkey.pem server.key
/usr/local/bin/fwcontrol server restart
exit

At this point, you might be asking why I didn't just use symbolic links. I tried that first, but the dashboard in FileWave Admin claimed that an SSL certificate wasn't installed.
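If you want to confirm that the copied certificate is the one Let's Encrypt issued, openssl can print its subject and validity window. The path below is the FileWave certificate location used in the steps above:

```shell
# Show who the certificate was issued to and when it expires.
sudo openssl x509 -in /usr/local/filewave/certs/server.crt -noout -subject -dates
```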

Third: Automate Certificate Renewals

Lastly, to make sure the certificates renew themselves a few weeks before they expire, you'll need to make a script to renew the certificates and move them into place periodically. You could run this every day or every week, as you prefer. You'll need to adjust the script's FQDN variable to be the fully-qualified domain name of your server, but it otherwise looks like this:


#!/bin/bash
FQDN="filewave.example.com"
/bin/certbot renew
cp -uf /etc/letsencrypt/live/${FQDN}/fullchain.pem /usr/local/filewave/certs/server.crt
cp -uf /etc/letsencrypt/live/${FQDN}/privkey.pem /usr/local/filewave/certs/server.key
yes | /usr/local/filewave/python/bin/python /usr/local/filewave/django/manage.pyc update_dep_profile_certs
/usr/local/bin/fwcontrol server restart
exit 0

Save the script at /usr/local/bin/certbot-renew.sh. Also, run "sudo chmod +x /usr/local/bin/certbot-renew.sh" to make sure it is executable. Then make it run at 5:00 AM every Saturday by adding this line to the bottom of /etc/crontab:


0 5 * * 6 root /usr/local/bin/certbot-renew.sh
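Before trusting the cron job, it's worth exercising the renewal path once by hand. Certbot's --dry-run flag tests renewal against Let's Encrypt's staging environment without touching your real certificate:

```shell
# Simulate a renewal end-to-end without replacing the live certificate.
sudo /bin/certbot renew --dry-run
```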

References

Some of the above was put together thanks to things I read from the following sources.

  1. https://community.letsencrypt.org/t/script-that-has-been-working-for-years-stopped-working-after-feb/122142
  2. https://github.com/nycon/filewave-installer/blob/main/filewaveAIO.sh

Sunday, April 4, 2021

Discussions on Staffing I.T. Departments

Note: In addition to this article, you may wish to read what I wrote on this topic back in 2014.

Several times each year, I find someone asking how large of an I.T. department they should have. Typically it is someone in the I.T. department trying to navigate this question so they can advise decision makers about their budget and/or organizational structure. This is a complicated question and sometimes the answers aren’t accepted because the intuition of the various people in these conversations can be very different.

What I’m going to do here is try to provide a neutral perspective that helps the involved parties have a constructive conversation. I’ll avoid error prone simplifications such as a devices-per-tech ratio, my personal intuition, and comparisons to similar organizations. My process takes a while, but if you stick with it I think it will help. It is based on inspiration from other neutral parties. I’ve included two of those sources in the notes at the end of this article. I encourage you to look at those worksheets to get an idea of how the process can look. Be aware that they may make assumptions that are different in your particular environment. By contrast, I propose a process that you can adapt to your individual situation.

Step 1: Conversations and Goal Setting

Much like a good Disaster Recovery Plan or Business Continuity Plan, the institution needs to start with its objectives. Here are some questions to get the conversation started:

  • What services are to be provided?
  • Which of those services are custom made and which are commodities?
  • What is the scale of each service? Is it only used by some secretarial staff, all employees, or the public?
  • How long of an outage is the institution willing to allow for each service? How frequently?
  • Are all matters of routine upkeep expected to happen during off-hours? If so, when are off-hours?
  • How many end-point devices will be in active service at any one time?
  • How long is an acceptable waiting period between asking for technical assistance and receiving it?
  • During what hours is technical support expected?
  • Will you provide support to guests, such as people connecting to your wifi or projectors?
  • How many templates of devices will you have? For example, perhaps you have high school student chromebooks and elementary school classroom iPads and cafeteria point-of-sale computers and office secretary computers and so on.
  • How many “bespoke” computers will you have which require custom attention? For example, does the PC running athletics events live streaming or the HVAC system require unique software setups that are not automated and centrally controlled?
  • Do you force all end-users to store data on backed-up servers or is valuable data stored directly on their end-point devices? If the data is on the end-point devices, do you expect technical support to recover the data if the device is upgraded, replaced, or damaged?

Taking a closer look at those questions, you’ll find that the answers aren’t always obvious. Consider the question “How many end-point devices will be in active service at any one time?” This could include desktop computers in offices, chromebooks and tablets assigned to students, labs, and even the computers that run software for your test scanning software, athletics event livestreaming system, and HVAC controls. Do teachers have a mobile device assigned to them as well as a desktop device in their classroom? Do you count the “spare” devices that you give to users when their current device breaks?

Let’s look at the questions about when outages can occur and when technical support is expected. If you’re answering for a school, this may seem obvious. Technical support is only needed when there are students around and outages can happen when class isn’t in session, right? Do you mind an outage while teachers are writing substitute teacher plans and using the copier 30 minutes after school dismisses? Will you expect technical support for the parent trying to connect their phone to your wifi during a basketball game at 6pm? How does the institution feel about an upgrade starting at 5pm which also happens to cause the livestream of a basketball game to “drop” for 15 minutes? Will the superintendent want technical support at Board of Education meetings at 7:30pm? Should system upgrades happen on Sundays in order to avoid impacting classes? If so, will you have any athletics events which are livestreamed and offer wifi to visiting parents?

I hope I’ve shown that there are a lot of situations that we take for granted and might not consider at first. This is why the goals should be defined up front. Otherwise, everyone will be unhappy with the results: management, I.T. staff, and the people that they serve alike.

Step 2: Making Lists and Numbers

Start simple. Make a list of every service you can remember. Give yourself a week or two to think of them all. Ask others to add to it. Look at the calendar and ask what services you need to worry about in each month, quarter, or season.

Do the same for every hardware category. Start with the obvious: desktops, laptops, tablets, printers, network switches, wifi access points, etc. You'll eventually remember things you don’t think about often. Phones, PA systems, fire alarms, cafeteria point-of-sale computers, copiers, fax machines, etc. are all easy to overlook at first but remember later, after you've walked through that office for unrelated reasons. Look through your asset management system (a.k.a. inventory database) and see what you might have missed. Make it a constant thought for several weeks.

As you find items, fill in a quantity where relevant (e.g. copiers but not software), the duration that your institution would be willing to have it unavailable (a.k.a. "return to service" time or RTS), and how much time it takes to maintain each day, week, month, and/or year.

For example, I might say that we have 10 copiers, we can't go without them for a full day (i.e. RTS tolerance is 8 hours), at least one needs service every month, it takes about 0.5 - 3 hours each time, and so on. I might also say that we have uniFLOW for managing those copiers, that it takes about 4 hours per month to manage the application, and another hour or so per month to manage its OS (Windows updates, etc.) So now I can see that the service of "copiers" takes about 5.5 to 8 hours per month of personnel time. I'll take the high number, otherwise I'm not planning appropriately for the target RTS. That 8 hours/month is roughly 0.06 FTE for regular operations. To come to that conclusion, I assumed 4 weeks per month. I also assumed 35 working hours per employee week after removing lunch breaks. That makes:

8 hours / (4 weeks x (35 hours/week)) = 0.057

Replacements, which happen every 5 years or so, are obviously going to be a larger drain on personnel time. So I make a note of that in this section of my data.

Do this for each service in the list that you’ve made. In this way, you quantify each service's required personnel. Right now, 0.06FTE seems like a rounding error, but it will add up. If we decide to hire more staff, it will also help us decide on the division of job duties throughout the department’s positions.

Next, calculate the impact of sick days and vacation time. For example, maybe you give 4 weeks of vacation time annually and assume each employee takes 1 to 2 weeks of sick and personal time annually. So that makes 46 out of 52 weeks, or 46/52 of a year, or about 88-89% of the year that any employee is present. This is very rough, since I'm working in weeks and not actual work days on the calendar. That means you'll need to increase any employee requirements by about 13% (52/46 ≈ 1.13) in order to continue to maintain expected levels of service during vacations, etc. The amount of vacation time and number of holidays your institution gives will influence the math, so the above is only a demonstration.
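This arithmetic is easy to script, which helps when you repeat it for dozens of services. The sketch below uses the example numbers from this article; substitute your own:

```shell
#!/bin/sh
# FTE for one service: monthly personnel hours / monthly hours per employee
# (8 hours/month for copiers; 4 weeks x 35 working hours/week)
fte=$(awk 'BEGIN { printf "%.3f", 8 / (4 * 35) }')
echo "copier service: $fte FTE"

# Coverage multiplier for vacation and sick time: 52 weeks / 46 weeks present
mult=$(awk 'BEGIN { printf "%.2f", 52 / 46 }')
echo "staffing multiplier: $mult"
```

Summing the per-service FTE figures and then applying the coverage multiplier gives the staffing level that supports your stated goals.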

Step 3: Reviewing and Revising

When you reach this point, your team of stakeholders will have a spreadsheet full of data, justifications, and the tools for transparent conversations with Human Resources and the budget making leadership. Now instead of opposing opinions, people working in good faith can have informed conversations. You can have conversations such as:

  • What is the value of increasing the staffing budget vs. decreasing the RTS goal?
  • Should you consider changing the guidelines for when scheduled outages may occur?
  • What services should be outsourced to keep your limited staff focused on the core mission?

In essence, you have built the formula and the data that goes into it. You can now “turn the knobs” to change the outputs and see what you might want to achieve and what you’re willing to pay (in money or time) to get there.

This process can also be used in future conversations about adding services. Want to change from unmanaged copiers to a system with accountability, printing limits, automation, and more? Do the math to figure out the impact on your FTE for different levels of expected service, different RTS targets, handling it in-house vs. outsourcing the service, etc. This doesn’t just address one conversation. It equips you to have better conversations internally and with vendors about any projects you may consider in the future.

Footnote: Outsourcing

It is worth noting that outsourcing a service reduces the staff necessary, but it doesn’t remove all of the staff time related to it. Continuing with the example above, if staff can’t login to the copiers, the I.T. department will spend time receiving the trouble-report, confirming it, testing if it is a problem caused by their equipment or the vendor’s, and then finally calling the company which has the service contract. If an issue happens every month, that could be 1 - 8 hours per month, depending on the system’s design. They still save time performing the hardware repairs, but the other steps are still handled by the internal I.T. department. Also, outsourcing can have a negative effect on the RTS. If the person who will make the repair has to drive for two hours to get to your office, that is lost productivity. So the question of outsourcing can cut both ways. I recommend considering it for narrow and specialized services, such as copiers, HVAC, computer controlled lighting, phone services, etc. I recommend staying away from it for more flexible tasks, such as general technical support, systems administration, programmers, etc. and issues that are core to your institution’s mission.

Footnote: Inspirations

Here are some of the documents that formed my thinking. If you review them carefully, you can “see” the logic I describe above woven through the math of the worksheets. However, these worksheets contain a lot of invisible assumptions. I offer the method above as a way to adapt the philosophy of these worksheets to your particular environment.

Tuesday, March 30, 2021

Clearing User Files on Macs

In some environments, it is desirable to clear all the user created and downloaded content from a Mac when the user logs out. Perhaps there is only one generic account or you're trying to strongly encourage users to only store things on servers or online services like Google Drive. To create this effect in my environment, I wrote a LaunchAgent and a configurable shell script. I've tested this up to MacOS 10.12, a.k.a. Sierra, but it will probably work on newer versions as well.

To start, the "engine" of this system is the following shell script. Place the code in the file /usr/local/bin/clear_local_files.sh and make it executable. You might need to create that directory manually, which you can do with sudo mkdir -p /usr/local/bin && sudo chmod 755 /usr/local/bin. When you finish pasting the following code into your preferred text editor (BBEdit is a great option), save the file clear_local_files.sh to that location. Then use sudo chmod +x /usr/local/bin/clear_local_files.sh to make it executable.


#!/bin/bash
#
# This script will clear away a lot of the files that users are likely
# to leave behind on the local disk.  This is meant as a way to encourage
# users to store files on the server, so that they aren't accidentally
# lost when a computer breaks down, is replaced, is upgraded, etc.
#

# The following is a list of directories at the root of the user's home
#   which will be cleared.
# Note:  The lack of Library allows account customizations to stay on the
#   local disk.
# Note:  Some sub-items will be moved back in a following setting.
# Warning:  Never put " in " in this list, as it will cause a syntax
#   error with loops.
clearDirs=( "Desktop" "Documents" "Downloads" "Movies" "Music" "Pictures" "Public" "Sites" )

# The following is a list of items to preserve in the user's home.
# Warning:  Never put " in " in this list, as it will cause a syntax
#   error with loops.
# Warning:  Be careful with spaces, colons, and slashes in file names.
keepDirs=( "Documents/Microsoft User Data" "Movies/iMovie data folders" "Movies/iMovie Events.localized" "Movies/iMovie Projects" "Movies/iMovie Library.imovielibrary" "Movies/iMovie Theater.theater" "Music/iTunes" "Pictures/iPhoto Library.photolibrary" "Pictures/Photos Library.photoslibrary" "Public/Drop Box" "Sites/images" "Sites/index.html" "Sites/Streaming" )

# This should be executed in the home directory of the current user.
cd ~

# Make a place to hide things.
mkdir -p ~/.backup0

# Move things into that hidden location
for item in "${clearDirs[@]}"
do
        if [ -e "${item}" ];
        then
                mkdir -p .backup0/"${item}"
                mv "${item}"/* .backup0/"${item}"/
        fi
done
        
# Move the things we're preserving out of the hidden location and back where they're supposed to be.
for item in "${keepDirs[@]}"
do
        if [ -e ".backup0/${item}" ];
        then
                mv ".backup0/${item}" "${item}"
        fi
done

# Get rid of anything that has been around too long.
if [ -e ~/.backup9 ];
then
        rm -rf ~/.backup9
fi

# "Age" each hidden backup by one "notch"
for index in {8..0}
do
        # Make sure it exists before moving it, to avoid errors.
        if [ -e ~/.backup${index} ];
        then
                index2=$((index + 1))
                mv ~/.backup${index} ~/.backup${index2}
        fi
done


exit

The next step is to make this run whenever a user logs out. However, it is easier to make this run at login than at logout. That's a small difference and mostly unnoticeable to the end user, so it's what I went with. To do this, I made a LaunchAgent by putting the following code into a file named com.reviewmynotes.clearLocalFiles.plist located at /Library/LaunchAgents.


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>KeepAlive</key>
        <false/>

        <key>Label</key>
        <string>com.reviewmynotes.clearLocalFiles</string>

        <key>LowPriorityIO</key>
        <true/>

        <key>ProgramArguments</key>
        <array>
                <string>/usr/local/bin/clear_local_files.sh</string>
        </array>

        <key>RunAtLoad</key>
        <true/>

        <key>LimitLoadToSessionType</key>
        <array>
                <string>Aqua</string>
        </array>

</dict>
</plist>
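A stray character in a LaunchAgent will silently keep it from loading, so it's worth validating the file before testing. plutil ships with macOS:

```shell
# Lint the property list; prints "OK" if the XML is well-formed.
plutil -lint /Library/LaunchAgents/com.reviewmynotes.clearLocalFiles.plist
```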

Now logout and login. Anything in the locations listed in clearDirs and not listed in keepDirs should be moved into a hidden folder called .backup1. At each login, that folder will be renamed so the number goes up by one. The folder .backup9 will be deleted each time. This gives you a chance to save people from their own mistakes.
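If you want to see the aging behavior without logging in and out repeatedly, the rotation loop can be exercised in a throwaway directory. This is a standalone sketch of the same logic, not part of the deployed script:

```shell
#!/bin/sh
# Simulate one "aging" pass from clear_local_files.sh in a temp directory.
tmp=$(mktemp -d)
mkdir "$tmp/.backup0" "$tmp/.backup9"

rm -rf "$tmp/.backup9"              # the oldest backup is discarded
for index in 8 7 6 5 4 3 2 1 0; do  # count down so nothing is overwritten
        if [ -e "$tmp/.backup$index" ]; then
                mv "$tmp/.backup$index" "$tmp/.backup$((index + 1))"
        fi
done

remaining=$(cd "$tmp" && ls -d .backup*)
echo "after one pass: $remaining"
rm -rf "$tmp"
```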

This system can be easily deployed via tools like Munki, Jamf, and FileWave.

Wednesday, February 19, 2020

G Suite Walled Garden for Email

If you're using email in a school, one thing you should consider is blocking outside email messages sent to your students. If you're in the United States, then COPPA applies to any students under 13 years old. For most areas, this means all elementary and middle/junior high school students. Some may think that this should apply to all students. That is a decision for your district leadership team.

This goal is very achievable if you use G Suite.

First, arrange students into OUs by school, grade, and/or year of graduation. Personally, I recommend a nested approach. I place student accounts into an OU for their year of graduation. This is easily changed for the small number of students who are retained each year. Then I place these OUs into OUs for their grade. This means that I can quickly move all students to their new grade. Any grade-level configurations go onto the grade's OU, not the OU for the class-of-####. This reduces the effort when students are promoted to the next grade each year. If your district only has one school for each grade (i.e. only one elementary school, one middle school, and one high school), then you can nest the grades' OUs inside OUs for each school, too. This allows a quick way to apply settings across all grades in a school.

If you don't have students cleanly arranged into OUs yet, you may want to consider using either GAM or Gopher for Users to do this efficiently. When coupled with exports from G Suite and your student information system, these can be very effective tools. I recommend GAM for those with no budget and/or lots of experience with the Linux command line and Gopher for Users for anyone more comfortable with a spreadsheet environment.
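As an illustration of the GAM route, moving users into an OU in bulk looks roughly like the following. The CSV filename, column names, and OU path are hypothetical, and GAM's syntax varies somewhat between versions, so check its documentation before running anything:

```shell
# Move a single user into an OU (email address and OU path are examples).
gam update user student@example.com org "/Students/Class of 2030"

# Do the same for every row of a CSV export, using its column names.
gam csv students.csv gam update user ~primaryEmail org ~orgUnitPath
```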

Now that you have the ability to "aim" settings at the relevant groupings of student accounts, log in to https://admin.google.com and go to Apps, then G Suite, then Gmail, then Advanced Settings. Select the OU to restrict on the left side. Scroll down to "Restrict delivery" under the "Compliance" header.

Hover the pointer over that line and the "Edit" button will appear on the far right. Click on that. New settings will appear. In this space, create a list of whitelisted domains. I called mine "Walled Garden". This list should start small and may have a few things added over time. Add your own domain here, as a precaution. Some websites used with students may require registering for accounts over email. You'll have to add those, too.

This may be obvious, but never add "gmail.com" or "yahoo.com" or other free email services to this list. If you do, it will defeat the purpose of this restriction. That said, I did end up adding "google.com" (not "gmail.com") so that students could receive notices of shared files from Google Drive.

You'll also want to add a rejection notice for email that isn't delivered. This goes in step #2 of the "Restrict delivery" settings. You should also check the box to allow bypassing this restriction for internal messages. Note that this applies to Gmail-to-Gmail messages, but you may have external products that technically aren't "internal," such as copiers that scan-and-email documents. This is why your domain should be in the list in step #1. When done, save your new settings. Then duplicate them for any other OUs that should have them. For easier management, I recommend re-using the same whitelist in each OU. For example, you could apply the settings to "Elementary School" and "Middle School", but use the same "Walled Garden" whitelist for each of them.

These settings now apply to both incoming and outgoing email which involve domains not on your whitelist. Note that external users (e.g. "person@yahoo.com") would receive the customized message from step #2 while internal users (i.e. your users) sending out would simply receive an "undeliverable" notice.

Tuesday, January 8, 2019

Download MacOS 10.12 (a.k.a. Sierra)

Sometimes a systems administrator needs to get specific OS installers, due to compatibility issues. MacOS 10.13 (a.k.a. "High Sierra") introduced a new file system called APFS. Apple also started making firmware updates part of OS updates. These changes can cause significant issues with imaging tools like DeployStudio.

Thankfully, the excellent MacOS systems administration blog Krypted.com published a way to download the version right before that. So if you need to set up imaging of Macs, this is the last version you can reliably use.

Use it for now, but start planning a new workflow to maintain your Macs, one that doesn't involve imaging; imaging isn't supported by Apple any more. For details on that particular challenge, look for presentations by Greg Neagle at the MacSysAdmin conference.

Note: If Krypted.com isn't available for some reason, the recommendation was simply to use this link.

Update: If you need a different version, Krypted.com has a newer article that covers a number of other versions.

Thursday, January 18, 2018

Exporting User List from Active Directory

Sometimes you just need a simple file with a list of users in it.

In my case, I've made various programs to streamline and automate the work of my department. We "feed" one of these programs user data from Active Directory and elsewhere so it can make and delete accounts when students transfer in or out of the district.

You may not have a custom system like that, but there are many other reasons to be able to export data from Active Directory into a spreadsheet or text listing. One example would be turning over a list of users to the payroll department, so they can tell you what accounts should have been closed but slipped through the cracks. (Side note: I recommend doing this at least annually and preferably every three to six months.)

To make such a list, login to a Domain Controller for your Active Directory system as a Domain Admin, run the command line, and use a command like this:


csvde -f ad.txt -n -d "ou=students,ou=People,dc=controller,dc=example,dc=com" -r "(&(objectCategory=person)(objectClass=user))" -l "sAMAccountName,givenName,sn,description"

That was probably too long to fit on the page, so let's break it down.

  • csvde:
    This will make a file in the current directory (a.k.a. folder). That file is in the CSV format. To remember this command, think of it as "CSV Data Export."
  • -f ad.txt:
    This file will be named "ad.txt".
  • -n:
    Any binary data is excluded.
  • -d "ou=students,ou=People,dc=controller,dc=example,dc=com"
    It will limit itself to data in the Organizational Unit (OU) named "students", which is inside "People", and in the Active Directory system at controller.example.com.
  • -r "(&(objectCategory=person)(objectClass=user))":
    It will limit the export to only user accounts. For example, if there are computers or groups in that OU, those will not be exported.
  • -l "sAMAccountName,givenName,sn,description":
    Its columns will be the username, the first name, the last name, and the description. Note that the first name is labeled "givenName" and the last name is labeled "sn" as in "surname."

If you want to change the OU, just adjust the part after the -d to include your OU and DC structure. If you want to change the data in the export file, just change the part after the -l. To learn more details, check out Microsoft's article on the csvde command.

If you adjust this to suit your environment, you should be able to generate CSV files that list your users very quickly. At my job, we can export over 1,000 users in under a minute. The CSV file can be read by scripts we write or imported into a Google Sheet and shared with Payroll for a quick account audit.
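Once the export exists, even simple shell tools can consume it. For instance, counting the exported accounts means skipping the header row and counting what remains; the sample data below is a stand-in for a real ad.txt:

```shell
#!/bin/sh
# Build a stand-in for the export: one header row, then one row per user.
cat > /tmp/ad.txt <<'EOF'
sAMAccountName,givenName,sn,description
jdoe,John,Doe,Class of 2030
asmith,Alice,Smith,Class of 2031
EOF

# Skip the header line and count the remaining rows.
count=$(tail -n +2 /tmp/ad.txt | wc -l | tr -d ' ')
echo "exported users: $count"
```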