Thursday, 18 September 2014

XFS on Red Hat Enterprise Linux 6

This is just a quick post. I've used the XFS filesystem on many, many Linux hosts for a great many years now. I've used it on CentOS6 since that OS was first released, and XFS is now the default filesystem for RHEL7.

Yay, XFS Logo

So, I went to make a new XFS filesystem on a new Red Hat Enterprise Linux 6 system in the past week, and I was more than a little surprised to find that XFS is not shipped as part of the base OS. More than that - it is simply not installable/obtainable without paying an extra subscription fee per host/CPU. (How did I not know this after 4+ years of using this OS??). So, I pay for an OS, and consequently I get less than if I had chosen a free equivalent?

At first I thought I was wrong, but no: XFS - in the mainline kernel, the default filesystem in RHEL7, and available in all of the free downstream rebuilds of RHEL - is simply not built into the base of RHEL6 (or 5 and below, for that matter).

Or is it?

'locate xfs' shows that the kernel modules are actually shipped within the default RHEL6 kernel. So, in that case, what then does one get for one's per-CPU licence fee? Answer: the filesystem utilities such as mkfs.xfs and xfsdump. The base system is more than capable of reading and supporting an XFS filesystem; it just can't make a new one.
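You can confirm this for yourself - a quick sketch using nothing beyond the stock tools:

# The xfs kernel module ships with the stock RHEL6 kernel...
modinfo xfs | head -n 3
modprobe xfs && grep xfs /proc/filesystems
# ...but the userspace tools are nowhere to be found without the add-on subscription
yum list xfsprogs xfsdump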

So, if you find yourself in such a position, there is a clever way around this. Whilst it will doubtless invalidate your RHEL support agreement, which, let's face it, is what you are paying for*, it is quite easy to do, so why not do it?

* Yes, this is clearly a self-defeating argument.

Simply download the latest xfsdump and xfsprogs RPM packages from your friendly local CentOS repository and install them on your shiny newly-invalidated RHEL box, and you can make as many XFS filesystems as you wish!
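A sketch of that idea - the mirror URL and package versions below are placeholders, so substitute whatever your nearest CentOS 6 mirror currently carries:

# Hypothetical mirror and version numbers - check your local CentOS 6 repo for the real ones
baseurl=http://mirror.example.com/centos/6/os/x86_64/Packages
yum install -y ${baseurl}/xfsprogs-3.1.1-14.el6.x86_64.rpm \
               ${baseurl}/xfsdump-3.0.4-3.el6.x86_64.rpm
# And away you go:
mkfs.xfs /dev/sdb1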

Other clever ways include:
  • Typing "yum install $URL1 $URL2" for the above two .rpm URLs
  • Typing "yumdownloader xfsdump xfsprogs" on a CentOS6 box and copying the packages across to the RHEL6 machine.
  • Installing CentOS instead
  • Installing RHEL7 (or CentOS7) instead
Depending on what you need the FS created for, you could even choose to remove said packages, and re-validate your RHEL Support.

And if Red Hat asks, I didn't tell you this.

Monday, 15 September 2014

CentOS 6 to CentOS 7: Upgrade of my Desktop

Deciding that the best way to learn a system is to use it, I recently decided to move my primary Desktop system at work from CentOS6 to CentOS7. This is the story of that upgrade.


Running the Upgrade Tool


So, after some planning and system prep work, I ran the CentOS upgrade tool. This caused me many false starts - including the fact that my system had been Oracle Linux at one point in its life, and the CentOS upgrade tool didn't like the OEL packages. So I tried to change the offending OEL packages to CentOS ones, which included the sterling idea of removing glibc from my system's rpm database. [Hint: don't do this, or if you really feel that you have to, do remember to type "--justdb" in the command - unlike me, who knew to type it but left it off the command I actually executed, and thus accidentally removed glibc from a running system, which was not the best scenario.] Along the way I did discover wonderful commands such as "yum distro-sync", which will prove invaluable in years to come, but there was a lot of heartache in between.
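For the record, here is a sketch of what I should have run - treat the package name as an illustration of the technique, not an invitation to go anywhere near glibc:

# Remove the vendor-branded package from the RPM database ONLY - the files stay on disk
rpm -e --justdb --nodeps glibc
# Reinstall the CentOS-branded equivalent over the top
yum install -y glibc
# Then pull every installed package back into line with the enabled CentOS repos
yum distro-sync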

After such small starter issues, I got the upgrade tool to recognise my system fully, so I ran the prechecks and then the actual upgrade itself... at which point it outright failed: the upgrade tool refused to run because I had GNOME installed. So, I "yum remove"d GNOME (as per the Red Hat KB) and continued.
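For reference, the invocation looked roughly like this - the command names below are the ones from the RHEL 6-to-7 in-place upgrade procedure that the CentOS tool packages mirror, and the instrepo URL is a placeholder, so check the CentOS wiki for the current packages and repo before copying it:

# Run the Preupgrade Assistant and fix anything it flags
preupg
# Then the upgrade itself, pointed at a CentOS 7 install tree (placeholder URL)
redhat-upgrade-tool --network 7.0 --instrepo http://mirror.example.com/centos/7/os/x86_64/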

After The Fall, Comes a Reinstall

So, the upgrade tool dutifully upgraded my system - and left me without a working GDM login screen (which I couldn't fix, since I don't know the inner murky depths of systemd), broken /var/log/ output files, and quite a few more elements that should have worked on a cleanly-installed system. So, after all of the above travails, I decided to simply reinstall. No one else on the internet appeared to have my gdm problem, except two others (on Fedora) who also reinstalled after their failed upgrades. It would have saved me many, many hours if I had just done this in the first place.

...Except Now I can't Reinstall Either

So I booted the Install DVD, ran the installer... but this then failed to install on my system.

I hit the issue "you haven't selected a bootable stage 1 partition" in the disk-partitioning section of the installer -- the installer had decided that my hard drive needed to be GPT instead of MBR format, but instead of telling me this, it hit me with unrelated errors claiming I did not have a boot partition (when I did).

See here for resolution for this issue: http://fedoraproject.org/wiki/Common_F20_bugs#UEFI_install_to_ms-dos_.28.27MBR.27.29_labelled_disk_fails_with_unclear_errors

So I had to convert my disk to GPT and re-run the installer. It ran easily after that; the rest was a boring, straightforward affair that someone else can blog about.
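For anyone else stuck at the same point, the conversion itself is quick - a sketch assuming the disk is /dev/sda, and obviously have backups before rewriting a partition table:

# Convert the existing MBR (msdos) label to GPT in place, keeping the partitions
sgdisk --mbrtogpt /dev/sda
# Ask the kernel to re-read the new partition table
partprobe /dev/sda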

I saw someone else at work also hit this issue, but they simply blew the whole disk away and let the installer do its own thing -- I wanted to do something silly, like keep the existing data I had on the drives without a reformat (yes, I had backups elsewhere, but that's not the point).

So, I finally get to Reinstall... and GNOME needs a lot of help

So much help that I posted about it here.

On CentOS6, I used Gnome2 as my primary desktop interface, so Gnome3 seemed like a logical thing to move to. With a decent amount of research and effort, I actually quite like it now. My link shows what I changed to make it feel like home.

Other System Stuff

# Install EPEL
yum install -y epel-release --enablerepo=extras
yum upgrade -y epel-release
# or manually:
yum install http://fedora.mirror.uber.com.au/epel/7/x86_64/e/epel-release-7-1.noarch.rpm


# Install ElRepo (for NVidia kernel)
yum install http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

# Install Chrome (as per http://www.if-not-true-then-false.com/2010/install-google-chrome-with-yum-on-fedora-red-hat-rhel/):
cat << EOF > /etc/yum.repos.d/google-chrome.repo
[google-chrome]
name=google-chrome - \$basearch
baseurl=http://dl.google.com/linux/chrome/rpm/stable/\$basearch
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
EOF
yum install google-chrome-stable



# Install "nux desktop" for vlc
yum install http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm

# Install vlc from Nux
yum install -y vlc


# Disable "nux desktop" from being auto-enabled
cd /etc/yum.repos.d/
sed -i.orig 's/enabled=1/enabled=0/' nux-dextop.repo




Nvidia Drivers - The Easy Way!

# Install ElRepo repo above
yum install nvidia-x11-drv nvidia-detect kmod-nvidia
reboot




Gnome 3 on CentOS 7 - How I Made It Lovely and Usable

I generally really liked Gnome2 in RHEL6 - it was stable, worked well, and its shortcomings had been largely addressed over the years. I promised I wouldn't fall prey to everyone else's griping about GNOME3 - but it's quite hard not to. For example, I have to use the command line to configure many of the GUI settings - seriously??

I won't whinge too much, I'll just record what I've had to do to make Gnome3 a nice place to be. After a flurry of several days' activities, summarised below, I actually really quite like Gnome 3 now, I just don't understand the defaults and/or design decisions behind them.

Starting out in Gnome 3

This picture does sum up what it first felt like to use Gnome 3 after many years of Gnome 2:
http://i.imgur.com/IIBxZm6.jpg

But what I ended up with is something far more like:

So how did I get to the point of a personal tick of approval?


Install some packages, configure the GUI from the command line:

# Key Gnome Tools: dconf editor, Extensions browser plugin, a menu editor and the all-important Tweak Tool
yum install -y dconf-editor gnome-shell-browser-plugin alacarte gnome-tweak-tool

# Update Firefox to v31.0, updated from v24 since RHEL7 was shipped
yum update -y firefox

# Install Gnome's Epiphany "Web" browser, to browse Gnome Extensions (only needed if you
# can't browse https://extensions.gnome.org with Firefox)
yum install -y http://mirror.internode.on.net/pub/fedora/linux/releases/19/Everything/x86_64/os/Packages/e/epiphany-3.8.2-1.fc19.x86_64.rpm

# Set the screen timeout (idle delay) to 30 minutes (1800 seconds), which cannot be done via the GUI options
# Configuring a GUI via the command line - seriously?
gsettings set org.gnome.desktop.session idle-delay 1800


Install Gnome Extensions:

Open https://extensions.gnome.org in Firefox browser, and install the following extensions, which are essential for desktop usage:
* Activities Configurator (to adjust top-left hot-corner timeout)
* Impatience (to adjust animation speeds)
* Frippery Panel Favourites (to put application-launch icons in the top panel)
* WindowOverlay Icons (Application Icons on each application preview in the Overview overlay)

Optional Extensions, for personal taste:
* Removable Drive Menu (Allows eject of removable devices from top panel)
* Caffeine (adds a button to top panel to disable screensaver/screen-power timeout; useful for a workday)
*  Lock Screen (adds a lock button to top panel, to allow single-click screen lock)

Now open the Gnome GUI Tweak Tool:

* Configure Shell Extensions/Activities Configurator, adjust HotCorner Sensitivity to 200 (as per http://stevenrosenberg.net/blog/desktops/GNOME/2013_1209_gnome_3_hot_corner_sensitivity)
* Configure Theme: Turn on Dark Theme for all applications
* Configure Shell Extensions/Impatience: Adjust to scale 0.65 (Gnome default is 1.0)
* Configure Fonts: Set Default font to "DejaVu Sans 10"
* Configure Desktop: Set background Picture URI to "Sandstone.jpg" (or something else you like)

Edit the "Favourites" Application List:

This list appears in multiple places, in the same order: as the Favourites in the "Applications" menu in the top Panel, as the icons used by "Frippery Panel Favourites", and as the menu in the Overview overlay. So, to edit it, use the following steps:

* Press the Windows key on your keyboard (aka the Super or Meta key) to get to the Overview overlay
* Right-click on each app you don't like in the left side-menu & remove it
* Now open the Show Applications (nine white dots) icon
* Right-click on each application icon & select "Add to Favourites"
* Drag the icons up & down to reorder them as you please

The same order then appears in all areas (Panel favourites, Applications->Favourites), which I really like.

Install a Firefox Extension to hide the title bar:

Open Firefox, and install the extension "Htitle" - this hides the top title bar when in full-screen mode, and gives you back quite a bit of screen real estate.

...And You're Done

And after that you have a very lovely, workable Gnomey system!



Bonus Marks: Make the Dark Theme More Pervasive

Ok, this is more personal taste than bonus marks. I definitely prefer the Adwaita Dark Theme for Gnome (which is just a dark version of the default Gnome3 theme), which is quite easy to turn on (in the Gnome Tweak Tool, as listed above).

However, once you enable this, eagle-eyed (and not-so-eagle-eyed, and even blind) people will probably notice that some Gnome apps don't look all that Dark when using the Dark theme, and thus look quite out of place. This doesn't make sense until you know that while many apps are now written against Gnome's current window-drawing toolkit, GTK3, some still use the older GTK2, and those older apps don't pick up the Dark theme. It is also possible for some GTK3 apps to override the dark theme choice, although this is less of an issue than the GTK2 apps.
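If you're curious which camp a particular application falls into, a quick check (with a hypothetical app name) is to see which GTK library its binary links against:

# Replace 'some-app' with the binary you are wondering about
ldd "$(command -v some-app)" | grep -i libgtk
# libgtk-x11-2.0 => GTK2 (ignores the dark theme); libgtk-3 => GTK3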

So, to fix this, we somewhat follow the instructions in this link, albeit reversed (thanks to this answer for pointing me there), and then add gtk-2.0 goodness on top of it all (thanks to this guy for the gtk-2.0 dark theme).

mkdir -p ~/.themes/Adwaita
cp -rp /usr/share/themes/Adwaita/gtk-* ~/.themes/Adwaita
cd ~/.themes/Adwaita/gtk-2.0
wget 'http://pastebin.com/download.php?i=vbnULbyi' -O gtkrc-dark
ln -sf gtkrc-dark ./gtkrc
cd ~/.themes/Adwaita/gtk-3.0
ln -sf gtk-dark.css gtk.css

Installing and using the Firefox theme "FT DeepDark" also makes Firefox blend in much better with the Dark theme.

Update: the latest release of the Firefox theme DeepDark is no longer compatible with Firefox 31.x - you will need to install an older version. See here for older versions; version 11.1 is still compatible.


Friday, 5 September 2014

Importing a SSL Wildcard Certificate from an Apache Webserver onto a Cisco ASA 5500

I recently needed to use the same wildcard certificate on both a Linux Apache host (Apache 2.2, RHEL6) and a Cisco ASA (5505), and this is how I did it. This blog post starts _after_ I have the certificate generated, signed, installed, working & tested on the Apache host (which was just a standard CSR + install process, documented in thousands of places elsewhere on the web).


Note: This is a direct-copy rip-off of another blog post (http://blog.tonns.org/2013/02/importing-ssltls-wildcard-certificate.html) - I don't really add or change much compared to that post (aside from notes along the way), as the steps worked fine for me; I'm just replicating it here for posterity in case that blog goes away.
Here are the steps:

1. Convert all certs and keys to PEM format


    mkdir asa
    openssl x509 -in example_com.crt -out asa/example_com.crt -outform pem
    # See note below re:next step for intermediaries 
    openssl x509 -in geotrust-intermediate-ca.crt -out asa/geotrust-intermediate-ca.crt -outform pem
    openssl rsa -in example_com.key -out asa/example_com.key -outform pem
  

Please note that your certificates may well be in PEM format already - if so, you only need the key-conversion step, and can use the original certificate files as-is.


Please also note that the intermediate-cert step above actually cut the number of chained certificates in my intermediary's cert file, from the original file's 3 chained certs down to 1. This wasn't some kind of clever amalgamation - the command simply wrote out only the first certificate in the chain. I'm pretty sure this would have broken things if I had imported the new file; I didn't investigate much though, as I realised that the original certs were already in PEM format, so I just deleted the newly-created file and copied the old one in.
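A quick way to spot this kind of thing is to count the certificates in each file before and after conversion (filenames as per the example above):

    grep -c 'BEGIN CERTIFICATE' geotrust-intermediate-ca.crt
    grep -c 'BEGIN CERTIFICATE' asa/geotrust-intermediate-ca.crt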


2. Now bundle them into PKCS12 format


    cd asa
    openssl pkcs12 -export -in example_com.crt -inkey example_com.key \
        -certfile geotrust-intermediate-ca.crt -out example_com.p12
    # you will need to choose an export password, when prompted
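Optionally, sanity-check what ended up inside the bundle (you will be prompted for the export password):

    openssl pkcs12 -in example_com.p12 -info -noout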

3. Now base64 encode it for the ASA (to paste into terminal window)

    ( echo -----BEGIN PKCS12-----;
      openssl base64 -in example_com.p12;
      echo -----END PKCS12-----; ) > example_com.pkcs12
      cat example_com.pkcs12

4. Import the cert into the ASA terminal via copy/paste from the above cat output

    fw1# conf t
    fw1(config)# crypto ca import example_com-trustpoint pkcs12 {exportPassword}

    Enter the base 64 encoded pkcs12.
    End with the word "quit" on a line by itself:
    -----BEGIN PKCS12-----
    { snip }
    -----END PKCS12-----
    quit
    INFO: Import PKCS12 operation completed successfully
    fw1(config)# exit
    fw1# wr me
    fw1# show crypto ca certificates

5. Enable the trustpoint on the outside interface

    fw1# conf t
    fw1(config)# ssl trust-point example_com-trustpoint outside
    fw1(config)# exit
    fw1# wr me
    fw1# show ssl

6. Bounce the VPN

    fw1# conf t
    fw1(config)# webvpn
    fw1(config-webvpn)# no enable outside
    WARNING: Disabling webvpn removes proxy-bypass settings.
    Do not overwrite the configuration file if you want to keep existing proxy-bypass commands.
    INFO: WebVPN and DTLS are disabled on 'outside'.
    fw1(config-webvpn)# enable outside   
    INFO: WebVPN and DTLS are enabled on 'outside'.
    fw1(config)# exit
    fw1# wr mem



Please note that the method above involves exporting the server's private SSL key as well as the certificate - this isn't quite as secure as having individual certificates with individual private keys for each server.

This SSL certificate's licensed rights covered this use-case (not all registrars allow it), but the registrar's SSL-management web interface provided no actual way to exercise that right. This method is therefore not quite as nice as individual certificates, but I had no other choice.

Monday, 11 August 2014

FirewallD: Adding Services and Direct Rules

This post will expand somewhat upon the firewall rules in my RHEL7-install blogpost. I'm trying to make an IPsec connection between two machines (CentOS6 & CentOS7) - I'll detail the IPsec in another post, but this covers adding the FirewallD rules on the CentOS7 box.

I did have quite a few of these commands in my RHEL7 post, but Blogger somehow ate them between Edits on my post.

Anyway, here we go:

Enable IPsec via Standard FirewallD Services

# Is IPsec enabled?
firewall-cmd --zone=public --query-service=ipsec
# No? Then enable it:
firewall-cmd --zone=public --add-service=ipsec
# and next reboot too:
firewall-cmd --permanent --zone=public --add-service=ipsec
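It's worth confirming the result afterwards - --list-all shows everything active in the zone:

# Show everything currently active in the public zone (services, ports, rich rules)
firewall-cmd --zone=public --list-all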


Manipulate Direct iptables Rules

Ok, that was easy. Now the hard bit: rate-limiting inbound new SSH connections, via FirewallD's Direct rules.

There are lots of ways to protect an SSH server on the public internet: moving the SSH port (PS: if you do this in RHEL7, you need to tell SELinux that the port has moved - this tip appears as a comment in the sshd config file), though that is no panacea (I have a high-port-listening SSH server on an IP with no DNS and no internet-advertised services... and it still gets hit quite a bit); key-only login (a great idea - password brute-force attacks become utterly useless); IPv6-only (good luck connecting to it from everywhere!); and even port-knocking (if that still exists). However, for a bog-standard SSH connection on port 22, another good option is rate-limiting NEW connections via iptables.
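As an aside, the SELinux step for a moved SSH port looks like this, assuming a hypothetical port of 2222 (semanage comes from the policycoreutils-python package on RHEL7):

yum install -y policycoreutils-python
semanage port -a -t ssh_port_t -p tcp 2222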


So, now we need to use the Direct interface to iptables. As we can see from the output below, the INPUT_direct chain (where Direct rules land) is evaluated by iptables before the zone-based chains.

[21:16][root@host:~]# iptables -nv -L INPUT
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target              prot opt in  out  source     destination
14535 8172K ACCEPT              all  --  *   *    0.0.0.0/0  0.0.0.0/0    ctstate RELATED,ESTABLISHED
    1   240 ACCEPT              all  --  lo  *    0.0.0.0/0  0.0.0.0/0
57138   34M INPUT_direct        all  --  *   *    0.0.0.0/0  0.0.0.0/0
57138   34M INPUT_ZONES_SOURCE  all  --  *   *    0.0.0.0/0  0.0.0.0/0
57138   34M INPUT_ZONES         all  --  *   *    0.0.0.0/0  0.0.0.0/0
   52  4547 ACCEPT              icmp --  *   *    0.0.0.0/0  0.0.0.0/0
56602   34M REJECT              all  --  *   *    0.0.0.0/0  0.0.0.0/0    reject-with icmp-host-prohibited


So, we add the following direct rules:
firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 3 --rttl --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j LOG --log-prefix "SSH_brute_force "
firewall-cmd --direct --add-rule ipv4 filter INPUT 1 -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 3 --rttl --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j DROP
firewall-cmd --direct --add-rule ipv4 filter INPUT 2 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j ACCEPT

And now we remove SSH from the public Zone:
firewall-cmd --zone=public --remove-service=ssh

I seriously suggest that you log back into this server with a new, secondary SSH connection to make sure that you haven't just locked yourself out!
And now feel free to try SSHing into the host 4 times - you will see that your 4th connect is blocked.
Please note that each connection can try multiple passwords, so this doesn't stop password brute-forcing for the first ~12 passwords - combining this with key-only login is still the most effective method.

If this is all working for you, remember to run the above commands with a --permanent flag:
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 3 --rttl --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j LOG --log-prefix "SSH_brute_force "
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 3 --rttl --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j DROP
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 2 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j ACCEPT
firewall-cmd --permanent --zone=public --remove-service=ssh
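To review what is now in place (both runtime and permanent):

firewall-cmd --direct --get-all-rules
firewall-cmd --permanent --direct --get-all-rules
firewall-cmd --zone=public --list-services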



Useful links I found on my travails:

Very good start to FirewallD:
http://ktaraghi.blogspot.com.au/2013/10/what-is-firewalld-and-how-it-works.html

Not actually FirewallD, but Linux kernel rate-limiting - this might be a useful link in the future!:
http://blog.oddbit.com/2011/12/26/simple-rate-limiting/

Friday, 11 July 2014

Red Hat Enterprise 7: This Train Has Now Arrived on Multiple Platforms, All Change

I am just preparing my first Red Hat Enterprise Linux 7 server - installed on Hyper-V, no less. Here is a collection of notes I have made along the way.

Guest VM on Hyper-V (Server 2012 R2)


I've used a Generation 2 VM for my RHEL7 guest - this is supposedly fully supported by both Microsoft and Red Hat, although fairly poorly documented by both parties (admittedly Microsoft's documentation is a little better than RH's, but it only goes up to RHEL6.5 and hasn't been updated for 7 yet).

I had to disable SecureBoot to get the Install DVD to boot, and subsequently keep it off for the installed VM too. Apparently there is a way to make it work (a colleague said he found a result on Google, although he didn't send me the link, as he said it needed to be done at installation time, and my server was already installed), but it's not really important.

Integration Services showed as "Degraded - Missing" after I installed the OS. However, despite both vendors saying that RHEL7 was a fully supported guest with Integration Services built-in, Integration Services was clearly broken. The missing major step, which I worked out myself using "yum search", was to install the meta-package "hyperv-daemons" - I.S. now shows as "Degraded - requiring Update", but at least it shows the IP Address etc - and it adds a VSS integration layer for crash-consistent snapshots!

yum install hyperv-daemons
systemctl enable hypervvssd.service
systemctl enable hypervkvpd.service
systemctl start hypervvssd.service
systemctl start hypervkvpd.service


CPUfreq may or may not be working - certainly the acpi kernel modules do not load (neither automatically nor manually) - but maybe there is power-saving auto-magic elsewhere in the system that I am unaware of. I might do some investigation later, but again I'm not too worried at this point.
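For what it's worth, this is roughly how I checked - the module and tool names are the usual ones (cpupower comes from the kernel-tools package):

modprobe acpi-cpufreq; echo $?     # fails to load inside the Hyper-V guest
cpupower frequency-info            # reports whether any cpufreq driver is active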

Sidenote: Guest on VMWare

VMWare Tools are also now built-in to the OS; install them with:
yum install open-vm-tools
I haven't yet tested this, but at least this step is documented by both RH & VMWare!

RHEL 7 Installation-Process Notes


Although it looks different, and the prompts are in a different order, installation isn't really any different from any other OS you've ever seen - I just used the install ISO and it installed.

I selected "Autopartition" on a raw 20GB disk image to see what would happen - it gave me the following disk layout:

Partition    Size  FS    Mount point
/dev/sda1    200M  vfat  /boot/efi
/dev/sda2    500M  xfs   /boot/
<lvm>        19G   xfs   /


Which is pretty much exactly what I wanted for this server.

Minimal Installation

I chose the Minimal set of installation packages (my usual choice for servers). I then added the following obviously-missing useful packages:
yum install -y nano bind-utils net-tools telnet ftp mlocate wget at lsof man-pages tcpdump time bash-completion bzip2 yum-utils traceroute rsync strace pm-utils logrotate screen nmap-ncat

For this server, I also pulled in the full Base group (a further ~120 packages), although I probably didn't need to:
yum groupinstall -y base

Red Hat Subscription-Manager Troubles

After installation, I ran the usual:
subscription-manager register --username <rhn_username> --autosubscribe
This refused to register the host and logged lots of HTTP 502 errors. I thrashed about for half an hour, to no avail. So, I left it for the night, came back in the morning, only to find that the damn thing worked immediately. Thanks, Red Hat, thanks -- I wouldn't have had that issue on CentOS, would I?


Obvious Differences from RHEL6


Service Management - Starting, Stopping, etc

The service management is now different with SystemD:
servicename=<servicename>
systemctl start ${servicename}.service
systemctl stop ${servicename}.service
systemctl status ${servicename}.service
# Enable on boot
systemctl enable ${servicename}.service
# Disable on boot
systemctl disable ${servicename}.service
# Check boot status
systemctl list-unit-files | grep ${servicename}

NTP: The Times Are A-Changin'

NTPd is no longer installed by default in RHEL7 - chrony is the new default NTP service.
See my updated NTP-On-Linux blog post for Chrony Setup:
http://itnotesandscribblings.blogspot.com.au/2014/05/ntp-on-linux-linux-host-needs-ntp-set.html
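A quick sanity check that chrony is doing its job (chronyd and chronyc ship in the "chrony" package):

systemctl status chronyd.service
chronyc sources -v     # lists the configured NTP sources and which one is currently selected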

Firewalls: Burn the Old Ways

Gone are the days of /etc/sysconfig/iptables - FirewallD now rules the roost.
I haven't looked in great detail, but I found the following commands very helpful in getting myself set up with a basic single-interface server:

I experienced a serious gotcha when creating custom services - after you copy and edit the new custom file, you need to restart the firewall service before the new service is visible. This is not documented in Red Hat's doco. Thanks again, guys.

cp /usr/lib/firewalld/services/http.xml /etc/firewalld/services/squid.xml
nano -w /etc/firewalld/services/squid.xml
firewall-cmd --get-services | grep squid    # the new service is not listed yet...
systemctl restart firewalld.service
firewall-cmd --get-services | grep squid    # ...and now it is

No EPEL - Yet

EPEL hasn't yet added non-beta RHEL7 support - watch this space at https://fedoraproject.org/wiki/EPEL.

RHEL7 Links And Resources


Red Hat Documentation (Official)

Quite useful - generally well-written and concise, albeit with occasional missing elements which can really cause an issue.

Overall Documentation:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/index.html

Basic Administration:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/part-Basic_System_Configuration.html

Firewall Information:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html

Other Useful Links:

Decent Overviews of firewallD:
http://www.certdepot.net/rhel7-get-started-firewalld/

Adding permanent Rules to FirewallD:
http://blog.christophersmart.com/2014/01/15/add-permanent-rules-to-firewalld/

Thursday, 26 June 2014

Powershell: OS Detection

What OS am I executing on? What bitness? Some Handy Functions to Use


Sometimes, it's important to know what OS you are running on, and/or how many bits that OS has. Here are some useful functions which can be reused in other Powershell scripts (a future post may include putting such things into a Powershell Module). Please note that this hasn't been 100% tested on all of the OSes identified, but should work*.

The other two functions are about detecting bitness - Get-OSBitness() will tell you if you are on a 64-bit or 32-bit OS, and Get-CurrentProcessBitness() will tell you what the current Powershell execution engine is (ie you can detect if you are running the 32-bit powershell.exe on a 64-bit OS). I can't really imagine G-CPB() being used much, but here it is anyway.

* Please also note that the Win8.1/2012R2 detection is known to be sometimes incorrect, and these OSes can instead show up as Win8/2012 (respectively); this is because Microsoft broke the detection mechanism in these OSes, and each application must now use a manifest.xml file to flag itself as a Win8.1/2012R2-aware app (as opposed to a legacy <= Win8 one) - I'm pretty sure Powershell.exe is properly manifested and should detect as Win8.1, but the default Powershell ISE is not (at the current time) and will show Win8.



Function Get-OSVersion() {
    # Version numbers as per http://www.gaijin.at/en/lstwinver.php
    $osVersion = "Version not listed"
    $os = (Get-WmiObject -class Win32_OperatingSystem)
    Switch (($os.Version).Substring(0,3)) {
        "5.1" { $osVersion = "XP" }
        "5.2" { $osVersion = "2003" }
        "6.0" { If ($os.ProductType -eq 1) { $osVersion = "Vista" } Else { $osVersion = "2008" } }
        "6.1" { If ($os.ProductType -eq 1) { $osVersion = "7" } Else { $osVersion = "2008R2" } }
        "6.2" { If ($os.ProductType -eq 1) { $osVersion = "8" } Else { $osVersion = "2012" } }
        # 8.1/2012R2 version detection can be broken, and show up as "6.2", as per http://www.sapien.com/blog/2014/04/02/microsoft-windows-8-1-breaks-version-api/
        "6.3" { If ($os.ProductType -eq 1) { $osVersion = "8.1" } Else { $osVersion = "2012R2" } }
    }
    return $osVersion
}


Function Get-CurrentProcessBitness() {
    # This function finds the bitness of the powershell.exe process itself (ie can detect 32-bit powershell.exe on a win64)
    $thisProcessBitness = 0
    switch ([IntPtr]::Size) {
        "4" { $thisProcessBitness = 32 }
        "8" { $thisProcessBitness = 64 }
    }
    return $thisProcessBitness
}

Function Get-OSBitness() {
    # This function finds the bitness of the OS itself (ie will detect 64-bit even if you're somehow using 32-bit powershell.exe)
    $OSBitness = 0
    switch ((Get-WmiObject Win32_OperatingSystem).OSArchitecture) {
        "32-bit" { $OSBitness = 32 }
        "64-bit" { $OSBitness = 64 }
    }
    return $OSBitness
}