Thursday, 18 September 2014

XFS on Red Hat Enterprise Linux 6

This is just a quick post. I've used the XFS filesystem on many, many Linux hosts for a great many years now. I've used it on CentOS6 since that OS was first released, and XFS is now the default filesystem for RHEL7.

Yay, XFS Logo

So, when I went to make a new XFS filesystem on a new Red Hat Enterprise Linux 6 system in the past week, I was more than a little surprised to find that XFS is not shipped as part of the base OS. More than that - it is simply not installable/obtainable without paying an extra subscription fee per host/CPU. (How did I not know this after 4+ years of using this OS??) So, I pay for an OS, and consequently I get less than if I had chosen a free equivalent?

At first I thought I was wrong, but no: XFS - which is in the mainline kernel, is the base default in RHEL7, and is available in all the downstream free versions of RHEL - is simply not built into the base of RHEL6 (and 5 and below, for that matter).

Or is it?

'locate xfs' shows that the kernel modules are actually shipped within the default RHEL6 kernel. So, in that case, what does one get for one's per-CPU licence fee? Answer: the filesystem utilities, such as mkfs.xfs & xfsdump. The base system is more than capable of reading & supporting an XFS filesystem - it just can't make a new one.
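
A quick check (run as root) proves the point - the stock kernel loads the module and registers the filesystem quite happily:

modprobe xfs
grep xfs /proc/filesystems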

So, if you find yourself in such a position, there is a clever way around this. Whilst it will doubtless invalidate your RHEL support agreement - which, let's face it, is what you are paying for* - it is quite easy to do, so why not do it?

* Yes, this is clearly a self-defeating argument.

Simply download the latest xfsdump and xfsprogs RPM packages from your friendly local CentOS repository and install them on your shiny newly-invalidated RHEL box, and you can make as many XFS filesystems as you wish!
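
As a sketch - the package versions below are illustrative only, so substitute whatever is current on your friendly mirror:

# Hypothetical version numbers - check your local CentOS 6 mirror for the real ones
yum install http://mirror.centos.org/centos/6/os/x86_64/Packages/xfsprogs-3.1.1-14.el6.x86_64.rpm \
    http://mirror.centos.org/centos/6/os/x86_64/Packages/xfsdump-3.0.4-3.el6.x86_64.rpm
# And away you go (example device, obviously):
mkfs.xfs /dev/sdb1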

Other clever ways include:
  • Typing "yum install $URL1 $URL2" for the above two .rpm URLs
  • Typing "yumdownloader xfsdump xfsprogs" on a CentOS6 box and copying the packages across to the RHEL6 machine.
  • Install CentOS
  • Install RHEL7 (or CentOS7)
Depending on what you need the FS created for, you could even choose to remove said packages, and re-validate your RHEL Support.

And if Red Hat asks, I didn't tell you this.

Monday, 15 September 2014

CentOS 6 to CentOS 7: Upgrade of my Desktop

Deciding that the best way to learn a system is to use it, I recently decided to move my primary Desktop system at work from CentOS6 to CentOS7. This is the story of that upgrade.


Running the Upgrade Tool


So, after some planning and system prep work, I ran the CentOS upgrade tool. This caused me many false starts - including the fact that my system had been Oracle Linux at one point in its life, and the CentOS upgrade tool didn't like the OEL packages. So I tried to change the offending OEL packages to CentOS ones, which included the sterling idea of removing glibc from my system's rpm database. (Hint: don't do this - or if you really feel that you have to, do remember to type "--justdb" in the command, unlike me, who knew to type it but left it off the actual command I executed, and thus accidentally removed glibc from a running system, which was not the best scenario.) I did discover wonderful commands such as "yum distro-sync", which will prove invaluable in years to come, but there was a lot of heartache in between.
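
For the record, a minimal sketch of the distro-sync step that sorted out the stray OEL packages (the vendor spot-check at the end is just illustrative):

# Force every installed package to the version in the enabled (CentOS) repos,
# up- or downgrading as required
yum distro-sync
# Spot-check that no Oracle-vendored packages remain
rpm -qa --qf '%{NAME} %{VENDOR}\n' | grep -i oracle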

After such small starter issues, I got the upgrade tool to recognise my system fully, so I ran the prechecks and then the actual upgrade itself... at which point it outright failed. The upgrade tool refused to upgrade, since I had GNOME installed. So, I "yum remove"d GNOME (as per the Red Hat KB) and continued.

After The Fall, Comes a Reinstall

So, the upgrade tool dutifully upgraded my system - and left me without a working GDM login screen (which I couldn't fix, since I don't know the inner murky depths of systemd), broken /var/log/ output files, and quite a few more elements that should have worked on a cleanly-installed system. So, after all of the above travails, I decided to simply reinstall. No one else on the internet appeared to have my gdm problem, except two others (on Fedora) who also reinstalled after their failed upgrades. It would have saved me many, many hours if I had just done this in the first place.

...Except Now I can't Reinstall Either

So I booted the Install DVD, ran the installer... but this then failed to install on my system.

I hit the issue "you haven't selected a bootable stage 1 partition" in the disk-partitioning section of the installer -- the installer had decided that my hard drive needed to be GPT instead of MBR format, but instead of telling me this, it hit me with unrelated errors telling me I did not have a boot partition (when I did).

See here for resolution for this issue: http://fedoraproject.org/wiki/Common_F20_bugs#UEFI_install_to_ms-dos_.28.27MBR.27.29_labelled_disk_fails_with_unclear_errors

So I had to convert my disk to GPT and re-run the installer. It ran easily after that; the rest was a boring, straightforward affair that someone else can blog about.

I saw someone else at work also hit this issue, but they simply blew the whole disk away and let the installer do its own thing -- I wanted to do something silly, like keep the existing data I had on the drives without a reformat (yes, I had backups elsewhere, but that's not the point).

So, I finally get to Reinstall... and GNOME needs a lot of help

So much help that I posted about it here.

On CentOS6, I used Gnome2 as my primary desktop interface, so Gnome3 seemed like a logical thing to move to. With a decent amount of research and effort, I actually quite like it now. My link shows what I changed to make it feel like home.

Other System Stuff

# Install EPEL
yum install -y epel-release --enablerepo=extras
yum upgrade -y epel-release
# or manually:
yum install http://fedora.mirror.uber.com.au/epel/7/x86_64/e/epel-release-7-1.noarch.rpm


# Install ElRepo (for NVidia kernel)
yum install http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

# Install Chrome (as per http://www.if-not-true-then-false.com/2010/install-google-chrome-with-yum-on-fedora-red-hat-rhel/):
cat << EOF > /etc/yum.repos.d/google-chrome.repo
[google-chrome]
name=google-chrome - \$basearch
baseurl=http://dl.google.com/linux/chrome/rpm/stable/\$basearch
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
EOF
yum install google-chrome-stable



# Install "nux desktop" for vlc
yum install http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm

# Install vlc from Nux
yum install -y vlc


# Disable "nux desktop" from being auto-enabled
cd /etc/yum.repos.d/
sed -i.orig 's/enabled=1/enabled=0/' nux-dextop.repo




Nvidia Drivers - The Easy Way!

# Install ElRepo repo above
yum install nvidia-x11-drv nvidia-detect kmod-nvidia
reboot




Gnome 3 on CentOS 7 - How I Made It Lovely and Usable

I generally really liked Gnome2 in RHEL6 - it was stable and worked well, and its shortcomings had been largely addressed over the years. I promised I wouldn't fall prey to everyone else's griping about GNOME3 - but it's quite hard not to. For example, I have to use the command line to configure many of the GUI settings - seriously??

I won't whinge too much; I'll just record what I've had to do to make Gnome3 a nice place to be. After a flurry of several days' activities, summarised below, I actually really quite like Gnome 3 now - I just don't understand the defaults and/or the design decisions behind them.

Starting out in Gnome 3

This picture does sum up what it first felt like to use Gnome 3 after many years of Gnome 2:
http://i.imgur.com/IIBxZm6.jpg

But what I ended up with is something far more like:

So how did I get to the point of a personal tick of approval?


Install some packages, configure the GUI from the command line:

# Key Gnome Tools: dconf editor, Extensions browser plugin, a menu editor and the all-important Tweak Tool
yum install -y dconf-editor gnome-shell-browser-plugin alacarte gnome-tweak-tool

# Update Firefox to v31.0, updated from v24 since RHEL7 was shipped
yum update -y firefox

# Set the screen timeout (1800 seconds = 30 minutes), which cannot be done via the GUI options
# Configuring a GUI via the command line - seriously?
gsettings set org.gnome.desktop.session idle-delay 1800

# Install Gnome's Epiphany "Web" Browser to browse Gnome Extensions
yum install -y http://mirror.internode.on.net/pub/fedora/linux/releases/19/Everything/x86_64/os/Packages/e/epiphany-3.8.2-1.fc19.x86_64.rpm


Install Gnome Extensions:

Open https://extensions.gnome.org in Firefox browser, and install the following extensions, which are essential for desktop usage:
* Activities Configurator (to adjust top-left hot-corner timeout)
* Impatience (to adjust animation speeds)
* Frippery Panel Favourites (to put application-launch icons in the top panel)
* WindowOverlay Icons (Application Icons on each application preview in the Overview overlay)

Optional Extensions, for personal taste:
* Removable Drive Menu (Allows eject of removable devices from top panel)
* Caffeine (adds a button to top panel to disable screensaver/screen-power timeout; useful for a workday)
* Lock Screen (adds a lock button to top panel, to allow single-click screen lock)

Now open the Gnome GUI Tweak Tool:

* Configure Shell Extensions/Activities Configurator: adjust HotCorner Sensitivity to 200 (as per http://stevenrosenberg.net/blog/desktops/GNOME/2013_1209_gnome_3_hot_corner_sensitivity)
* Configure Theme: Turn on Dark Theme for all applications
* Configure Shell Extensions/Impatience: Adjust to scale 0.65 (Gnome default is 1.0)
* Configure Fonts: Set Default font to "DejaVu Sans 10"
* Configure Desktop: Set background Picture URI to "Sandstone.jpg" (or something else you like)

Edit the "Favourites" Application List:

This list appears in multiple places, in the same order: as Favourites in the "Applications" menu in the top Panel, as the icons used by "Frippery Panel Favourites", and as the menu in the Overview overlay. So, to edit it, use the following steps:

* Press the Windows key on your keyboard (aka the Super or Meta key) to get to the Overview overlay
* Right-click on each app you don't like in the left side-menu & remove it
* Now open the Show Applications (nine white dots) icon
* Right-click on each application icon & select "Add to Favourites"
* Drag the icons into whatever order you please

The same order then appears in all areas (Panel favourites, Applications->Favourites), which I really like.

Install a Firefox Extension to hide the title bar:

Open Firefox, and install the extension "Htitle" - this hides the top title bar when in full-screen mode, and gives you back quite a bit of screen real estate.

...And You're Done

And after that you have a very lovely, workable Gnomey system!



Bonus Marks: Make the Dark Theme More Pervasive

Ok, this is more personal taste than bonus marks. I definitely prefer the Adwaita Dark Theme for Gnome (which is just a dark version of the default Gnome3 theme), which is quite easy to turn on (in the Gnome Tweak Tool, as listed above).

However, once you enable this, eagle-eyed (and not-so-eagle-eyed, and even blind) people will probably notice that some Gnome apps don't look all that Dark when using the Dark theme, and thus look quite out of place. This doesn't make sense until you know that while many apps are now written with Gnome's window-drawing library GTK3, some still use the older GTK2, and those older apps don't utilise the Dark theme. It is also possible for some GTK3 apps to override the dark theme choice, although this is less of an issue than the GTK2 apps.

So, to fix this, we somewhat follow the instructions in this link, albeit reversed (thanks to this answer for pointing me there), and then add gtk-2.0 goodness on top of it all (thanks to this guy for the gtk-2.0 dark theme).

mkdir -p ~/.themes/Adwaita
cp -rp /usr/share/themes/Adwaita/gtk-* ~/.themes/Adwaita
cd ~/.themes/Adwaita/gtk-2.0
wget http://pastebin.com/download.php?i=vbnULbyi -O gtkrc-dark
ln -sf gtkrc-dark ./gtkrc
cd ~/.themes/Adwaita/gtk-3.0
ln -sf gtk-dark.css gtk.css

Installing and using the Firefox theme "FT DeepDark" also makes Firefox blend in much better with the Dark theme.

Update: the latest release of the DeepDark Firefox theme is no longer compatible with Firefox 31.x - you will need to install an older version. See here for older versions; version 11.1 is still compatible.


Friday, 5 September 2014

Importing a SSL Wildcard Certificate from an Apache Webserver onto a Cisco ASA 5500

I recently needed to use the same wildcard certificate on both a Linux Apache host (Apache 2.2, RHEL6) and a Cisco ASA (5505), and this is how I did it. This blog post starts _after_ I have the certificate generated, signed, installed, working & tested on the Apache host (which was just a standard CSR + install process, documented in thousands of places elsewhere on the web).


Note: This is a direct rip-off of another blog post (http://blog.tonns.org/2013/02/importing-ssltls-wildcard-certificate.html) - I don't really add or change much compared to that post (aside from notes along the way), as the steps worked fine for me; I'm just replicating it here for posterity in case that blog goes away.
Here are the steps:

1. Convert all certs and keys to PEM format


    mkdir asa
    openssl x509 -in example_com.crt -out asa/example_com.crt -outform pem
    # See note below re:next step for intermediaries 
    openssl x509 -in geotrust-intermediate-ca.crt -out asa/geotrust-intermediate-ca.crt -outform pem
    openssl rsa -in example_com.key -out asa/example_com.key -outform pem
  

Please note that your certificates may well be in PEM format already - if so, you only need the key-conversion step, and can use the original certificate files for the rest.
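
A quick way to check: PEM files are plain ASCII with a BEGIN header, whereas DER files are binary.

head -1 example_com.crt
# "-----BEGIN CERTIFICATE-----" means it's already PEM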


Please also note that the intermediate-cert step above actually cut the number of chained certificates in my intermediary's cert file, from the original file's 3 chained certs down to 1. This wasn't some kind of clever amalgamation - the command simply only wrote out the first link in the chain. I'm pretty sure this would have been broken if I imported the new file; I didn't investigate this much though, as I realised that the original certs were already in PEM format, so I just deleted the newly-created file and copied the old one in.
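
If you want to verify how many certificates a bundle file actually contains before and after such a conversion, a simple grep will tell you:

grep -c 'BEGIN CERTIFICATE' geotrust-intermediate-ca.crt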


2. Now bundle them into PKCS12 format


    cd asa
    openssl pkcs12 -export -in example_com.crt -inkey example_com.key \
        -certfile geotrust-intermediate-ca.crt -out example_com.p12
    # you will need to choose an export password, when prompted

3. Now base64 encode it for the ASA (to paste into terminal window)

    ( echo -----BEGIN PKCS12-----;
      openssl base64 -in example_com.p12;
      echo -----END PKCS12-----; ) > example_com.pkcs12
    cat example_com.pkcs12

4. Import the cert into the ASA terminal via copy/paste from the above cat output

    fw1# conf t
    fw1(config)# crypto ca import example_com-trustpoint pkcs12 {exportPassword}

    Enter the base 64 encoded pkcs12.
    End with the word "quit" on a line by itself:
    -----BEGIN PKCS12-----
    { snip }
    -----END PKCS12-----
    quit
    INFO: Import PKCS12 operation completed successfully
    fw1(config)# exit
    fw1# wr me
    fw1# show crypto ca certificates

5. Enable the trustpoint on the outside interface

    fw1# conf t
    fw1(config)# ssl trust-point example_com-trustpoint outside
    fw1(config)# exit
    fw1# wr me
    fw1# show ssl

6. Bounce the VPN

    fw1# conf t
    fw1(config)# webvpn
    fw1(config-webvpn)# no enable outside
    WARNING: Disabling webvpn removes proxy-bypass settings.
    Do not overwrite the configuration file if you want to keep existing proxy-bypass commands.
    INFO: WebVPN and DTLS are disabled on 'outside'.
    fw1(config-webvpn)# enable outside   
    INFO: WebVPN and DTLS are enabled on 'outside'.
    fw1(config)# exit
    fw1# wr mem



Please note that the method above involves exporting the server's private SSL key as well as the certificate - this isn't quite as secure as having individual certificates with individual private keys for each server.

This SSL certificate's licensed rights covered this use-case (not all registrars' licences do), but the registrar's SSL-management web interface provided no actual way to exercise this right. This method is therefore not quite as nice as individual certificates, but I had no other choice.

Monday, 11 August 2014

FirewallD: Adding Services and Direct Rules

This post will expand somewhat upon the firewall rules in my RHEL7-install blogpost. I'm trying to make an IPsec connection between two machines (CentOS6 & CentOS7) - I'll detail the IPsec in another post, but this covers adding the FirewallD rules on the CentOS7 box.

I did have quite a few of these commands in my RHEL7 post, but Blogger somehow ate them between edits of that post.

Anyway, here we go:

Enable IPsec via Standard FirewallD Services

# Is IPsec enabled?
firewall-cmd --zone=public --query-service=ipsec
# No? Then enable it:
firewall-cmd --zone=public --add-service=ipsec
# and next reboot too:
firewall-cmd --permanent --zone=public --add-service=ipsec
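
To double-check, list what the zone now allows - the first command shows the running config, the second what will apply after a reload/reboot:

firewall-cmd --zone=public --list-services
firewall-cmd --permanent --zone=public --list-services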


Manipulate Direct iptables Rules

Ok, that was easy. Now the hard bit: rate-limiting inbound new SSH connections, via FirewallD's Direct rules.

There are lots of ways to protect an SSH server on the public internet: moving the SSH port (PS: if you do this in RHEL7, you need to tell SELinux that the port has moved - this appears in the sshd config file), although that is no panacea (I have a high-port-listening SSH server on an IP with no DNS and no internet-advertised services... and it still gets hit quite a bit); key-only login (a great idea - password brute-force attacks become utterly useless); IPv6-only (good luck connecting to it from everywhere!); and even port-knocking (if that still exists). However, for a bog-standard SSH connection on port 22, another good way is rate-limiting NEW connections via iptables.


So, now we need to use the Direct interface to iptables. As we can see from the output below, the Direct chain rules are evaluated by iptables before the Zone-based ones.

[21:16][root@host:~]# iptables -nv -L INPUT
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
14535 8172K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
1 240 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
57138 34M INPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
57138 34M INPUT_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
57138 34M INPUT_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
52 4547 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
56602 34M REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited


So, we add the following direct rules:
firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 3 --rttl --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j LOG --log-prefix "SSH_brute_force "
firewall-cmd --direct --add-rule ipv4 filter INPUT 1 -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 3 --rttl --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j DROP
firewall-cmd --direct --add-rule ipv4 filter INPUT 2 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j ACCEPT

And now we remove SSH from the public Zone:
firewall-cmd --zone=public --remove-service=ssh

I seriously suggest that you log back into this server with a new, secondary SSH connection to make sure that you haven't just locked yourself out!
And now feel free to try SSHing into the host 4 times - you will see that your 4th connection is blocked.
Please note that each connection can try multiple passwords, so this doesn't stop password brute-forcing for the first ~12 passwords - combining this with preshared-key-only entry is still the most effective method.
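
As a bonus, you can watch the tracked source addresses and their hit counters via the xt_recent proc interface (the file is named after the --name parameter in the rules above):

cat /proc/net/xt_recent/SSH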

If this is all working for you, remember to run the above commands with a --permanent flag:
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 3 --rttl --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j LOG --log-prefix "SSH_brute_force "
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 3 --rttl --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j DROP
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 2 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource -m comment --comment "SSH Brute-force protection" -j ACCEPT
firewall-cmd --permanent --zone=public --remove-service=ssh



Useful links I found on my travels:

Very good start to FirewallD:
http://ktaraghi.blogspot.com.au/2013/10/what-is-firewalld-and-how-it-works.html

Not actually FirewallD, but Linux kernel rate-limiting - this might be a useful link in the future!:
http://blog.oddbit.com/2011/12/26/simple-rate-limiting/

Friday, 11 July 2014

Red Hat Enterprise 7: This Train Has Now Arrived on Multiple Platforms, All Change

I am just preparing my first Red Hat Enterprise Linux 7 server - installed on Hyper-V, no less. Here is a collection of notes I have made along the way.

Guest VM on Hyper-V (Server 2012 R2)


I've used a Generation 2 VM for my RHEL7 guest - this is supposedly fully supported by both Microsoft and Red Hat, although fairly poorly documented by both parties (admittedly Microsoft's documentation is a little better than RH's, but it only goes up to RHEL6.5 and hasn't been updated for 7 yet).

I had to disable SecureBoot to get the Install DVD to boot, and subsequently keep it off for the installed VM too. Apparently, there is a way to make it work (a colleague said he found a result on Google, although he didn't send me the link, as he said it needed to be done at installation time, and my server was already installed), but it's not really important.

Integration Services showed as "Degraded - Missing" after I installed the OS. So, despite both vendors saying that RHEL7 was a fully supported guest with Integration Services built in, Integration Services was clearly broken. The missing major step, which I worked out myself using "yum search", was to install the meta-package "hyperv-daemons" - I.S. now shows as "Degraded - requiring Update", but at least it shows the IP address etc - and it adds a VSS integration layer for crash-consistent snapshots!

yum install hyperv-daemons
systemctl enable hypervvssd.service
systemctl enable hypervkvpd.service
systemctl start hypervvssd.service
systemctl start hypervkvpd.service
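
And a quick check that the daemons actually came up:

systemctl status hypervvssd.service hypervkvpd.service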


CPUfreq may or may not be working - certainly the ACPI kernel modules do not load (neither automatically nor manually) - but maybe there is power-saving auto-magic elsewhere in the system that I am unaware of. I might do some investigation later, but again, I'm not too worried at this point.

Sidenote: Guest on VMWare

VMWare Tools are also now built into the OS; install them with:
yum install open-vm-tools
I haven't yet tested this, but at least this step is documented by both RH & VMWare!

RHEL 7 Installation-Process Notes


Although it looks different, and the prompts are in a different order, installation isn't really any different from any other OS you've ever seen - I just used the install ISO and it installed.

I selected "Autopartition" on a raw 20GB disk image to see what would happen - it gave me the following disk layout:

Partition    Size  FS     Mount point
/dev/sda1    200M  vfat   /boot/efi
/dev/sda2    500M  xfs    /boot/
<lvm>        19G   xfs    /


Which is pretty much exactly what I wanted for this server.

Minimal Installation

I chose the Minimal set of installation packages (my usual choice for servers). I then added the following obviously-missing useful packages:
yum install -y nano bind-utils net-tools telnet ftp mlocate wget at lsof man-pages tcpdump time bash-completion bzip2 yum-utils traceroute rsync strace pm-utils logrotate screen nmap-ncat

For this server, I also pulled in the full Base group (a further ~120 packages), although I probably didn't need to:
yum groupinstall -y base

Red Hat Subscription-Manager Troubles

After installation, I ran the usual:
subscription-manager register --username <rhn_username> --autosubscribe
Which refused to register the host and logged lots of HTTP 502 Errors. I thrashed about for half an hour, to no avail. So, I left it for the night, came back in the morning, only to find that the damn thing worked immediately. Thanks Red Hat, thanks -- I wouldn't have had that issue on CentOS, would I?


Obvious Differences from RHEL6


Service Management - Starting, Stopping, etc

The service management is now different with SystemD:
servicename=<servicename>
systemctl start ${servicename}.service
systemctl stop ${servicename}.service
systemctl status ${servicename}.service
# Enable on boot
systemctl enable ${servicename}.service
# Disable on boot
systemctl disable ${servicename}.service
# Check boot status
systemctl list-unit-files | grep ${servicename}

NTP: The Times Are A-Changin'

NTPd is no longer installed by default in RHEL7 - chrony is the new NTP service.
See my updated NTP-On-Linux blog post for Chrony Setup:
http://itnotesandscribblings.blogspot.com.au/2014/05/ntp-on-linux-linux-host-needs-ntp-set.html

Firewalls: Burn the Old Ways

Gone are the days of /etc/sysconfig/iptables - FirewallD now rules the roost.
I haven't looked in great detail, but I found the following commands very helpful in getting myself set up with a basic single-interface server:

I experienced a serious gotcha when creating custom services - after you copy and edit the new custom service file, you need to restart the firewall service before the new service is recognised. This is not documented in Red Hat's doco. Thanks again, guys.

cp /usr/lib/firewalld/services/http.xml /etc/firewalld/services/squid.xml
nano -w /etc/firewalld/services/squid.xml
firewall-cmd --get-services | grep squid
systemctl restart firewalld.service
firewall-cmd --get-services | grep squid
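
For reference, the edited squid.xml ends up as something like this minimal sketch (3128 being squid's default port):

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>squid</short>
  <description>Squid caching web proxy</description>
  <port protocol="tcp" port="3128"/>
</service>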

No EPEL - Yet

EPEL hasn't yet added non-beta RHEL7 support - watch this space at https://fedoraproject.org/wiki/EPEL.

RHEL7 Links And Resources


Red Hat Documentation (Official)

Quite useful - generally well-written and concise, albeit with occasional missing elements which can really cause an issue.

Overall Documentation:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/index.html

Basic Administration:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/part-Basic_System_Configuration.html

Firewall Information:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html

Other Useful Links:

Decent Overviews of firewallD:
http://www.certdepot.net/rhel7-get-started-firewalld/

Adding permanent Rules to FirewallD:
http://blog.christophersmart.com/2014/01/15/add-permanent-rules-to-firewalld/

Thursday, 26 June 2014

Powershell: OS Detection

What OS am I executing on? What bitness? Some Handy Functions to Use


Sometimes, it's important to know what OS you are running on, and/or how many bits that OS has. Here are some useful functions which can be reused in other Powershell scripts (a future post may include putting such things into a Powershell Module). Please note that this hasn't been 100% tested on all of the OSes identified, but should work*.

The other two functions are about detecting bitness - Get-OSBitness() will tell you if you are on a 64-bit or 32-bit OS, and Get-CurrentProcessBitness() will tell you what the current Powershell execution engine is (ie you can detect if you are running the 32-bit powershell.exe on a 64-bit OS). I can't really imagine G-CPB() being used much, but here it is anyway.

* Please also note that the Win8.1/2012R2 detection is known to be sometimes incorrect, and these OSes can instead show up as Win8/2012 (respectively); this is because Microsoft broke the detection mechanism in these OSes, and each application must now use a manifest file to flag itself as a Win8.1/2012R2 app (as opposed to a legacy <= Win8 one) - I'm pretty sure Powershell.exe is properly manifested and should detect as Win8.1, but the Powershell ISE is not (at this current time) and will show Win8.



Function Get-OSVersion() {
    # Version numbers as per http://www.gaijin.at/en/lstwinver.php
    $osVersion = "Version not listed"
    $os = (Get-WmiObject -class Win32_OperatingSystem)
    Switch (($os.Version).Substring(0,3)) {
        "5.1" { $osVersion = "XP" }
        "5.2" { $osVersion = "2003" }
        "6.0" { If ($os.ProductType -eq 1) { $osVersion = "Vista" } Else { $osVersion = "2008" } }
        "6.1" { If ($os.ProductType -eq 1) { $osVersion = "7" } Else { $osVersion = "2008R2" } }
        "6.2" { If ($os.ProductType -eq 1) { $osVersion = "8" } Else { $osVersion = "2012" } }
        # 8.1/2012R2 version detection can be broken, and show up as "6.2", as per http://www.sapien.com/blog/2014/04/02/microsoft-windows-8-1-breaks-version-api/
        "6.3" { If ($os.ProductType -eq 1) { $osVersion = "8.1" } Else { $osVersion = "2012R2" } }
    }
    return $osVersion
}


Function Get-CurrentProcessBitness() {
    # This function finds the bitness of the powershell.exe process itself (ie can detect 32-bit powershell.exe on a win64)
    $thisProcessBitness = 0
    switch ([IntPtr]::Size) {
        "4" { $thisProcessBitness = 32 }
        "8" { $thisProcessBitness = 64 }
    }
    return $thisProcessBitness
}

Function Get-OSBitness() {
    # This function finds the bitness of the OS itself (ie will detect 64-bit even if you're somehow using 32-bit powershell.exe)
    $OSBitness = 0
    switch ((Get-WmiObject Win32_OperatingSystem).OSArchitecture) {
        "32-bit" { $OSBitness = 32 }
        "64-bit" { $OSBitness = 64 }
    }
    return $OSBitness
}


Tuesday, 24 June 2014

Windows Update Client - Useful Commands


Windows Update is a key tool in diagnosing many Windows-related problems. It's handy to know a few shortcuts to assist in diagnosis.

GUI

There is a basic GUI built into Windows (since Vista). This can be instantly accessed by typing the command "wuapp" - this works in a Cmd window, in the Win7 Start Menu, and also in the Windows 8/8.1/2012 Metro-interface Start Menu.

This GUI will tell you if Updates are working, and show you a history of installed updates.

Command Line Client

The following commands will force the local client to do things:
REM Check for new updates:
wuauclt /detectnow
REM Install any new pending updates (warning, this may reboot your machine)
wuauclt /updatenow
REM NB: above command doesn't seem to work properly any more on Server 2012

REM Force a report into WSUS (may be useful if the client says it is updated in WUapp, but WSUS says it is still missing patches):
wuauclt /reportnow
REM Force a machine to reregister with its WSUS server (not usually that useful):
wuauclt /resetauthorization /detectnow

Restart the Windows Service

Quite a few problems are resolved by restarting the WU service on a client. The easiest way to restart the service is via the command line:
net stop wuauserv
net start wuauserv

Key Registry Locations

Check the following keys for information:
General Settings will show you a few items, including NextDetectionTime (the Policy key is usually more useful though):
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate

 (Group) Policy Settings include which (if any) local WSUS server you are using, and what the scheduled settings are:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate

Log File

Log file is located here: %windir%\WindowsUpdate.Log
I won't go through it in detail, but the log file often contains a lot of errors which aren't necessarily relevant. The important bits are:
  • Can the WU Service on the machine access the WSUS server and/or internet to check-for and download patches?
  • Is there a problematic update which is failing to apply?
  • Are there other obvious errors?

Web Proxy Issues

Often, downloading issues are caused by a web proxy. Check and/or reset the machine's proxy settings (which are different to the logged-in user's settings) with the following script (written in "Batch" to work across the whole gamut of OSes). Please note I haven't needed to try this on Server 2012/Win8 yet, so it may have changed.

@setlocal
@echo off

REM This Script resets the machine-level proxy config to autodetect & autoscript.

REM We need bitsadmin to do this - which is built into =>2008/Vista+, but not =<XP/2003
SET bitsadmin=bitsadmin
if not exist %windir%\system32\bitsadmin.exe SET bitsadmin="%~dp02003_bitsadmin_x86.exe"
REM Please note the above line probably doesn't cover 64-bit 2003...

Echo **
Echo ** Output current proxies to the screen (in case someone is watching this)
Echo **
%bitsadmin% /util /getieproxy networkservice 2>/nul
%bitsadmin% /util /getieproxy localservice 2>/nul
%bitsadmin% /util /getieproxy localsystem 2>/nul


Echo **
Echo ** Reset the proxy config on the machine to autodetect/Autoscript
Echo **
proxycfg -d 2>/nul
netsh winhttp reset proxy 2>/nul

%bitsadmin% /util /setieproxy networkservice NO_PROXY 2>/nul
%bitsadmin% /util /setieproxy localservice NO_PROXY 2>/nul
%bitsadmin% /util /setieproxy localsystem NO_PROXY 2>/nul

%bitsadmin% /util /setieproxy networkservice AUTODETECT 2>/nul
%bitsadmin% /util /setieproxy localservice AUTODETECT 2>/nul
%bitsadmin% /util /setieproxy localsystem AUTODETECT 2>/nul

%bitsadmin% /util /setieproxy networkservice AUTOSCRIPT http://wpad/wpad.dat 2>/nul
%bitsadmin% /util /setieproxy localservice AUTOSCRIPT http://wpad/wpad.dat 2>/nul
%bitsadmin% /util /setieproxy localsystem AUTOSCRIPT http://wpad/wpad.dat 2>/nul



Last Resort: Reset the WU Service

The following commands perform a full reset of the WU client-side stuff. Try not to do this unless you know that your machine is quite busted.
This works on all OSes 2012R2 and below, and mostly works on XP/2003, though there you need a copy of bitsadmin.exe from the 2003R2 Tools (or Resource Kit, I forget). This is written in "Batch" to work across the whole gamut of OSes.

echo ********************************************************
echo ** Now resetting Windows Update Services on this machine

echo ********************************************************

REM Set the date format for later use. Please note this is __highly__ Locale Dependent, for non-Australian machines "old" is used instead.
FOR %%A IN (%Date%) DO (
    FOR /F "tokens=1-3 delims=/-" %%B in ("%%~A") DO (
        SET ISODATE=%%D%%B%%C
    )
)
SET ISODATE=%ISODATE:~0,8%
echo %ISODATE% | findstr /r "^[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]" >NUL 2>&1
IF ERRORLEVEL 1 SET ISODATE=Old

echo ********************************************************
echo ** We start by stopping the services
echo ********************************************************
sc config wuauserv start= disabled
net stop wuauserv

echo.
echo ********************************************************
echo ** Now we reset all BITS downloads and stop BITS service
echo ********************************************************
SET bitsadmin="bitsadmin"
if not exist "%windir%\system32\bitsadmin.exe" SET bitsadmin="%~dp02003_bitsadmin_x86.exe"
%Bitsadmin% /RESET /ALLUSERS
%Bitsadmin% /RESET
sc config bits start= disabled
net stop bits
net stop wuauserv

echo.
echo *******************************************************
echo ** Next, we delete the detection times
echo *******************************************************
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update" /v AUState /f >nul 2>&1
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update" /v NextDetectionTime /f >nul 2>&1
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update" /v ScheduledInstallDate /f >nul 2>&1
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update" /v DownloadExpirationTime /f >nul 2>&1

echo.
echo *******************************************************
echo ** Now we archive the log
echo *******************************************************
move "%windir%\WindowsUpdate.log" "%windir%\WindowsUpdate.log.%ISODATE%"

echo.
echo *******************************************************
echo ** Now we archive the internal datastore
echo *******************************************************
move "%windir%\SoftwareDistribution" "%windir%\SoftwareDistribution.%ISODATE%"
rmdir /q /s "%ALLUSERSPROFILE%\Application Data\Microsoft\Network\Downloader"

echo.
echo *******************************************************
echo ** Now we re-register all the DLLs (just to be safe)
echo *******************************************************
regsvr32 /s wuapi.dll
regsvr32 /s wuaueng.dll
regsvr32 /s wuaueng1.dll
regsvr32 /s wuauserv.dll
regsvr32 /s wucltui.dll
regsvr32 /s wups.dll
regsvr32 /s wups2.dll
regsvr32 /s wuweb.dll
regsvr32 /s qmgrprxy.dll
regsvr32 /s qmgr.dll
regsvr32 /s atl.dll
regsvr32 /s jscript.dll
regsvr32 /s msxml3.dll

echo.
echo *******************************************************
echo ** Now we restart the services, and see how we go
echo *******************************************************
REM We set it twice, first to auto then delayed; sometimes the sc call fails for -delayed, and we don't want to leave it in a Disabled state
sc config wuauserv start= auto
sc config bits start= auto
sc config wuauserv start= delayed-auto
sc config bits start= delayed-auto
net start bits
net start wuauserv

echo.
echo *******************************************************
echo ** Now we wait for 30 seconds, and then we trigger a client detection
echo *******************************************************
ping -n 30 127.0.0.1 >nul
wuauclt /resetauthorization /detectnow

echo.
echo *******************************************************
echo ** All done (fingers crossed!)
echo *******************************************************





Monday, 16 June 2014

Android SMS on 4.4 KitKat - Goodbye Hangouts, Hello 8sms

I tried, I really tried, to like Hangouts. I even tried to simply live with it. But, enough is enough, and it has now been replaced. Did I fail? No, this time, it's not me - so Goodbye Google Hangouts, Hello 8sms.


For those not in possession of a new-ish Android device: in the current latest version of Android (v4.4, KitKat), Google has replaced the standard SMS app with their own "Hangouts" app, instead of a dedicated SMS app. According to Google, this is supposedly superior (apparently it integrates with Gmail in some way, not that I noticed). However, Hangouts is the first SMS app in many years that left me confused from the very start; for instance, it took me much longer to work out how to create a new SMS in Hangouts than on any phone prior (and I've used Android since v2.1, and many, many other phones from many other manufacturers).

Usually, I assume user-error/ignorance with these things - it's up to me to adjust to a new way, and the new way is better... and most often this truly is the case. But for Hangouts, it's been three months now, and I can say I know it's not me.

I'm going to skip any more of why I (and every single other person I have asked) think Hangouts is lacking in basic usability, and will jump straight into how to fix the problem: install 8sms from the Play Store (bonus marketing links here: https://play.google.com/store/apps/details?id=com.thinkleft.eightyeightsms.mms and http://8s.ms/.) 8sms is essentially the same stock SMS app which has been used as the default SMS app for all Androids v4.3 and prior; the developer has simply forked the AOSP code, added a few new bits including KitKat compatibility, and posted it with the moniker "8sms".

Upon installation, 8sms imported all of my existing SMSes from Hangouts, and (with a few security prompts) switched itself to be my default send/recv SMS app. So simple - and now SMS on my phone works like SMS always has (better, actually). It's good when things Just Work Properly.

Update, Oct 2014: As lovely as 8sms is, the developer has recently added advertisements into the base app - a "feature" not present when I switched apps and made my original blog post. This may or may not present a problem for you: apparently you can make a donation & remove the ads, but I haven't updated the app yet, so I can't verify any of this info. I'll leave it up to your own sound judgement as to whether paying for an app is worthwhile.

Wednesday, 28 May 2014

Git: Just Getting Started, Barely Scratching the Surface

Scene: In my (precious-little) spare time, I'm working on Project Euler. I haven't done any proper mathematics since I left Uni >10 years ago (where I completed a Major in Pure Mathematics), so it's all feeling more than a little rusty. I'm also teaching myself Python and again working on projects bigger than 10-line infrastructure scripts, so it's a good [re-]learning experience.

So, 10 Euler Problems in, I'm realising that certain patterns recur in the Problems, and it will make my life *a lot* easier if I use modules & descriptive function names, and create reusable code. I am also realising that I strongly need source code control - I've only ever really used Subversion (which I did quite like, but never really used many advanced features of), but why not throw another log on my bonfire of learning? So additionally learning Git it is.

Topics covered:

I plan to cover off:
  • Creating a new Git master/shared repo based on an existing unmanaged set of files [source code] from a client machine
  • Create some cloned repos
  • Basic file checkins.
Things I know are missing from this post:
  • Branching in Git
  • Multi-master push/pull stuff (which Git was truly built for)
  • Updating your local client's repo from the main shared repo which someone else has updated (which I think is called "rebasing" in Git-land)
  • Anything else not-completely-basic
Assumed:
  • All steps below assume use of Linux/*nix - please adjust Windows/Other commands as you require
  • Access from your client machines' user accounts to a shared host via ssh (ssh access from client->server)
As per previous blog entries, this is less about education on the topic as a whole, and more about recording the steps so that I can do them again. This probably deserves an extra warning: you may well not learn anything in this post that isn't better documented elsewhere. Also, I'm not really a Developer, so again, there are probably better examples to follow elsewhere. That said, I haven't seen this particular combination of steps in my explorations, so I feel the need to record it.

Quick note: I'm using Git a bit like Subversion (ie single-master)

I know Git is fully decentralised source control, and I'm not using it in the best/most idiomatic way, but as an opening gambit I need to cocoon myself in the ideological constructs that I know already. So, we're going to construct a Git repo setup in the way I used to use Subversion. The following steps will create a shared repo on a shared-access host that you can think of as a [master] Subversion repo, and then use this repo as the central source of checkins and checkouts.


Step 0: Configure Git on all hosts (Shared and Client)

Make sure git is installed on all systems (run as root):
if [ `which yum` ]; then yum install -y git; fi
if [ `which apt-get` ]; then apt-get install -y git; fi

Configure your user account(s):
git config --global user.email "email@email.com"
git config --global user.name "My Name"

(I'm sure I'm missing other useful user info here, but the above info is minimally required for later commits).

Bonus extra step: configure user-wide git file excludes (ie: tell git to permanently ignore certain files/types/directories when assessing the checkin status of a working directory). This helps to ignore temporary/overwritten/cache files.

# Create a global "ignore" file in your homedir. This is a minimal file based on what has annoyed me so far - see references below for better ideas
cat > ~/.gitignore_global <<EOF
# Python Byte-compiled / Optimised / DLL files
__pycache__/
*.py[cod]

EOF

git config --global core.excludesfile ~/.gitignore_global
References:
Guide on creating Ignore files : https://help.github.com/articles/ignoring-files
A lot of pre-configured Ignore files: https://github.com/github/gitignore

Step 1: Create a "Master" Shared Repo

Create a Shared Repo on a shared server ("SharedHost01") that you have ssh access to. I have run this command on SharedHost01 itself - there may be other ways to do this.

cd ~
mkdir projectName.git
cd projectName.git
git init --bare

Step 2: Create a Client Repo (from Existing Content)

This step can probably be done better - eg skip the copy step and clone straight from master over the top of the existing content - but I have no idea of the semantics of the overwrite behaviour; I was just being overly cautious because I didn't want to lose my existing code!

This step is run on your client machine ("ClientHost01"), which has existing content that you suddenly realise you need to manage.
Assumption: the existing project is in directory "~/projectName", so the parent dir is ~.

# Create a backup of the original code in case we get this wrong
cd ~
mv projectName{,.orig}
mkdir projectName
cd projectName
# Now we pull down a blank project from the Shared Host (the trailing "." clones into the current directory, rather than a new subdirectory)
git clone user@shareHost:projectName/ .
# Copy in the existing code
cp -rp ../projectName.orig/* .
# Add all content (aside from the globally-ignored files from Step 0...)
git add .
# Check it into your local on-machine repo
git commit -m "Initial ProjectName checkin, with existing codebase"
# Now push this "new" code back to the Shared Host
git push origin master
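
A quick sanity check at this point never hurts - the working tree should be clean, and your commit should show in the log:

git status
git log --oneline -5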

Step 3: Set Up a Second Client Repo

cd ~
mkdir projectName
cd projectName
# Now we pull down the initial code we just checked in (again, cloning into the current directory)
git clone user@shareHost:projectName/ .

And that's it for now - the above has pretty much worked for me so far, so here's hoping I haven't got it horribly wrong. :)

Other References:

I have used several pages from this guide: https://www.atlassian.com/git/tutorial/git-basics

Monday, 12 May 2014

Configuring a new Windows Machine


Today I learnt a new way to configure a new Windows machine: with a PowerShell package manager. A bit like YUM for Windows! All thanks to https://chocolatey.org/. As a side note, this package manager, although currently third-party, will be built into Windows itself fairly soon, via OneGet in Powershell v5.

This assumes you have Powershell >= 3 installed already.

Into a cmd Window:
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%systemdrive%\chocolatey\bin

And into a PowerShell Window, some useful stuff:
cinst notepadplusplus.install
cinst Firefox
cinst GoogleChrome
cinst 7zip
cinst flashplayeractivex
cinst flashplayerplugin
cinst putty
cinst sysinternals
cinst procexp

cinst sublimetext2
cinst curl
cinst Wget
cinst winmerge
cinst wireshark



Remove Windows 8 Default CrApps:

And a bonus Windows 8 tidbit that I made up on my very own: how to remove a lot of crappy apps shipped by default with Windows 8.1 (and maybe in 8.0) in PowerShell:

$CrappyApps = @("Microsoft.BingFinance","Microsoft.BingFoodAndDrink",`
"Microsoft.BingHealthAndFitness","Microsoft.BingMaps",`
"Microsoft.BingNews","Microsoft.BingSports","Microsoft.BingTravel",`
"Microsoft.BingWeather","Microsoft.HelpAndTips",`
"microsoft.windowscommunicationsapps","Microsoft.WindowsReadingList",`
"Microsoft.XboxLIVEGames","Microsoft.ZuneMusic","Microsoft.ZuneVideo",`
"CheckPoint.VPN","f5.vpn.client","JuniperNetworks.JunosPulseVpn","Microsoft.MoCamera","SonicWALL.MobileConnect")
 

$CrappyApps | % { Get-AppxPackage -Name $_ | Remove-AppxPackage 2>&1 | Out-Null }

Note, the above few commands haven't been run in production - I ran each Get|Remove command individually on my own machine, and then arrayified it all for this post, so there may be an error lurking in there.

Wednesday, 7 May 2014

Modifying Web Server SSL settings to current modern web standards


This is not a post talking about the whys and wherefores of the settings below; nor is it meant to be a discussion of SSL security in general - just a quick reference guide to setting SSL responses in different web servers & OSes. This post will be edited as I find the need to configure different servers and software.

Apache on Linux:

Edit /etc/httpd/conf.d/ssl.conf (on RHEL6; other OSes move this file around).

Comment out the current SSL settings:
#SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
#SSLProtocol all -SSLv2


Add settings:
SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!ADH:!AECDH:!MD5:!DSS
SSLHonorCipherOrder     on

# This next one cuts out IE6 on WinXP (good riddance)
SSLProtocol all -SSLv2 -SSLv3
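
After restarting Apache (service httpd restart on RHEL6), you can verify the SSLv3 shutout from any Linux box with openssl - the hostname below is a placeholder for your own server:

# This handshake should now fail outright...
openssl s_client -connect www.example.com:443 -ssl3 < /dev/null
# ...while a TLS connection should still succeed
openssl s_client -connect www.example.com:443 -tls1 < /dev/null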



Note/update, Oct 2014: The advice in this post predates the POODLE SSL attack, but already disabled SSLv3 anyway, so the recommendations have not needed updating in light of recent SSL attacks - it is still fully valid advice.

NTP on Cisco IOS


To round out my NTP notes, here is NTP on Cisco IOS.
Timezone is Adelaide time, GMT+9:30 (Australian Central Standard, ie ACST).
Remote NTP server is Internode's - substitute for (or add to) your preferred ISP/pool.ntp server.

! Set timezone
clock timezone ACST 9 30
clock summer-time ACDT recurring 1 Sun Oct 2:00 1 Sun Apr 3:00

!Set NTP for local NTP server
ntp server 192.168.1.254
!Set NTP for remote NTP server

ntp server ntp.on.net

!Does this router have a DNS server set? If not, add it for NTP resolution
!ip name-server 192.168.1.100



Tuesday, 6 May 2014

NTP on Linux


A Linux host needs NTP set to ensure correct time sync. These commands below set this for Linux systems, with an emphasis on Australian settings (swap in other NTP servers for non-Australian servers). This can be set simply by pasting the commands below into a Bash prompt (as root).

This guide makes no attempt to check/enforce the security of the NTP server: issues such as disabling commands like "mon" are not covered here, and appropriate firewalling is assumed. The NTP config file shipped in Red Hat Enterprise Linux has a secure-by-default config (in RHEL6, if not prior as well), and the commands below simply assume security and configure the time sources.

All commands are IPv6-compatible, although only IPv4 is used in the addressing below.

Please note that Red Hat Enterprise Linux 7 has introduced Chrony as the default NTP service instead of the venerable NTPd - see notes at the end for chrony config.

Systems with NTPd

Install NTP on the system, strip defined servers

# For yum/RHEL-based systems
if [ `which yum` ] ; then yum install -y ntp; fi
# For apt/Debian-based systems
if [ `which apt-get` ] ; then apt-get install -y ntp; fi
# Backup original config
cp -p /etc/ntp.conf{,.orig}
# Strip all default servers
perl -i -pe 's/^server/#server/' /etc/ntp.conf


# Optional: Configure local timezone
ln -sf /usr/share/zoneinfo/Australia/Adelaide /etc/localtime

Add Local Servers

cat >> /etc/ntp.conf <<EOF
# NTP servers
server 192.168.1.1 prefer # Set this to your local NTP-serving machine if you have one
server 3.au.pool.ntp.org


EOF


Add Internode or Telstra Servers

Only required if you are on Internode networks:

cat >> /etc/ntp.conf <<EOF

# Internode NTP server

server ntp.on.net

EOF


Only required if you are on Telstra networks:

cat >> /etc/ntp.conf <<EOF

# NTP servers
server tic.ntp.telstra.net
server toc.ntp.telstra.net

EOF

Optional Step: Allow Local subnets to query this NTP

This will allow other machines on your local network to query this NTP server. Remember to allow inbound port UDP:123 on your host firewall.

cat >> /etc/ntp.conf <<EOF
# Allow Local subnets to query this NTP
restrict 192.168.0.0 mask 255.255.0.0 nomodify notrap

EOF



Ensure NTP is started & starts on boot

Commands tested on RHEL6 only; other OSes left as an exercise for the reader.

service ntpd start
chkconfig ntpd on
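
Then give it a few minutes and check that your peers are reachable (the "reach" column should climb towards 377):

ntpq -p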
 
 

Systems with Chrony (RHEL7 and others)

Red Hat Enterprise Linux 7 uses Chrony as the default NTP daemon - unless you have a good reason to use ntpd, you can simply configure chrony the same way as above.

yum install -y chrony
# Show config
timedatectl
# Set timezone
timedatectl list-timezones | grep Adelaide
timedatectl set-timezone Australia/Adelaide
# Show NTP status
chronyc sources
# Change NTP config
perl -i -pe 's/^server/#server/' /etc/chrony.conf
cat >> /etc/chrony.conf <<EOF
# NTP servers
server 192.168.1.1 iburst # Set this to your local NTP-serving machine if you have one
server 3.au.pool.ntp.org iburst
EOF
chronyc sources
systemctl restart chronyd.service
systemctl enable chronyd.service


NTP on Windows


A Windows Domain needs NTP set on its PDC Emulator to ensure the domain has correct time sync. These commands set this for Windows systems, with an emphasis on Australian settings.


This can be set simply by pasting the commands into a cmd window (I recommend setting NTP1 to your actual local NTP server/router if you have one). These commands are expected to work on Server 2008 onwards.

SET TARGET=localhost
SET NTP1=192.168.1.1
SET NTP2=2.pool.ntp.org
SET NTP3=3.pool.ntp.org
w32tm /config /computer:%TARGET% /update /manualpeerlist:"%NTP1% %NTP2% %NTP3%" /syncfromflags:MANUAL


Australian Settings

Same commands as above, just swap in these variables instead.

Internode version:

SET TARGET=localhost
SET NTP1=192.168.1.1
SET NTP2=ntp.on.net
SET NTP3=3.au.pool.ntp.org


Telstra version:

SET TARGET=localhost
SET NTP1=192.168.1.1
SET NTP2=toc.ntp.telstra.net
SET NTP3=tic.ntp.telstra.net

Confirm Settings

Validate this with the query command:
w32tm /query /configuration /computer:%TARGET%

Outlook: Migrate Outlook Profiles to Office 365 (Wave 15)


Office 365, the current Exchange-2013-based incarnation ("Wave 15"), has (semi-)easy migration options from on-premises... except for the client-side, if your client-side is an existing Outlook 2007/10/13 based setup. Third-party tools, such as MigrationWiz, can fill the gap (possibly quite well), but if these are not an option, then read on.

Wipe Outlook Profiles & Start  Again

Now, the fastest way forward to migrate a fleet is to simply wipe all Outlook profiles & start again. Not that nice, but as long as your Autodiscover is working (and it is, isn't it? :) then all a migrated user has to do is click "Next" a lot when next opening Outlook.

Current advice that I found around the web is usually to just delete this reg key:
HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles
but this results in a horrible, confusing "select profile" prompt when next opening Outlook, which helps no one migrate. So don't do this. :)


My advice is to create a Group Policy which uses GPO Preferences to delete several more reg keys/values and also a folder. Set each entry in the Preferences to Apply Once only, and to run in the user's security context. All of the settings are User settings, not Machine.

Make your GPO delete the following reg keys:
HKCU\SOFTWARE\Microsoft\Office\14.0\Outlook\Setup\First-Run
HKCU\SOFTWARE\Microsoft\Office\14.0\Outlook\Setup\FirstRun
HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles
And also delete the folder:
%LOCALAPPDATA%\Microsoft\Outlook

Duplicate the 14.0 keys above for as many versions of Outlook as you wish to cater for [14.0 = Outlook 2010, 15.0 = 2013, etc].

Then:
  • Set your GPO's permissions to only the users you wish to migrate (ie remove Authenticated Users from the GPO; then for testing, make it only one test user; for Staged, simply add each set of users as you stage-migrate them; for Cut-over, make it all users)
  • Set the Computer Settings of this Policy to Disabled (all of the above settings are User)
  • Link the Policy to any/all OUs with User Accounts.

Can't I just use a PRF File with a Server Name?

Gone are the days (Wave 14 and prior) where you could simply apply a PRF file or other such niceties - Microsoft now hides the server name field and uses a per-user GUID (assigned when first migrating a user) as a Virtual Connection Point. If your aim is to fully seamlessly migrate Outlook profiles (without third-party tools) utilising manual configuration, then I leave this as an exercise for the reader (hint: you will need to create a per-user PRF file which sets up the VCP and uses the HTTP-proxying connection attributes, and then apply this PRF to each respective user).




The Start

Hello and welcome,

This blog is primarily dedicated to small notes on IT stuff as I find it - in 15+ years in IT, I've found, used and implemented more tidbits than I could possibly remember. I have often found myself needing to refer to them 10+ years later, wishing I had recorded the more arcane bits earlier, rather than having to re-find them. So, I plan to simply record small items as I go, with the aim of being a useful reference for myself and others in years to come.

Milton.