Creating Archives from BackupPC

As I talked about in a previous post about BackupPC, it is a very powerful tool when it comes to doing self-hosted backups. The downside is when you want to archive out a machine. For example, you have backups of a host, but the host is long gone, and you just want to archive the data. There are two ways to go about this. Either you can use the web interface to create a restore tar/zip file to download (which doesn't always work, especially if done over the internet), or you can create the tar backup on the server, compress it, md5 it, and download it using sftp. I like the second option, mostly because I'm going through it right now. I have a backup server out in the cloud that I need to archive some 50 hosts from, so here is how I did it.

Simply log into the server, su to the backuppc user, and go to wherever you want the archive created.

/usr/share/backuppc/bin/BackupPC_tarCreate -h nameOfHost -n -1 -s '/home' / > ./home.tar

In the example above, I'm getting an archive of the home directory for host "nameOfHost". You can do this for any backed-up folder. Once done, you can create an md5sum of the file to help verify that you downloaded it intact. You can also bzip2 the file to (hopefully) make it smaller, and md5sum that one as well.
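
For example, picking up from the home.tar created above:

bzip2 home.tar
md5sum home.tar.bz2 > home.tar.bz2.md5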

Either way, it is a great way to get very large archives created so you don't have to go through the browser for everything. Feel free to script it; that's what I did (a rough sketch follows below). I was able to start the archive and let it run over the weekend before downloading once the work week started again.
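
Here is a sketch of the kind of script I mean. The host list, share, and output names are placeholders you would adjust for your own setup:

#!/bin/bash
# Archive one share from each host, then compress and checksum it.
for host in host1 host2 host3; do
    /usr/share/backuppc/bin/BackupPC_tarCreate -h $host -n -1 -s '/home' / > ./$host-home.tar
    bzip2 ./$host-home.tar
    md5sum ./$host-home.tar.bz2 > ./$host-home.tar.bz2.md5
done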

Did this command work for you? Did it not? What did work for you? Please let me know in the comments.

Use rclone to get Dropbox working on Linux again

A while back, Dropbox dropped a lot of support for Linux, such as dropping XFS and EncFS, which broke things for a lot of users. It ended up causing problems for me at work because we use CentOS, and all of a sudden glibc was too old to even run Dropbox headless. Eventually I gave up on Dropbox and started just using it for simple things through the web browser, but then I discovered rclone.

Using rclone, I was not only able to view everything in Dropbox (which, by the way, still worked even though my company uses Okta for single sign-on), but I was able to mount Dropbox on my local file system! For those of you familiar with WebDAV, this works in a similar way. When you "mount" Dropbox, it doesn't download anything like the official app does; it all works online. Put files into the mounted folder, and they will upload.
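
You don't even have to mount anything to move files around. Once a remote is configured (mine ends up named "dropbox", as set up below; the paths here are placeholders), a one-off upload is just:

rclone copy /path/to/local/files dropbox:some/folder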

Getting started is pretty easy. The following commands were taken from https://rclone.org/dropbox/.

rclone config
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Dropbox
   \ "dropbox"
[snip]
Storage> dropbox
Dropbox App Key - leave blank normally.
app_key>
Dropbox App Secret - leave blank normally.
app_secret>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> Y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[dropbox]
type = dropbox
token = {"access_token":"BIG LONG TOKEN HIDDEN","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
dropbox              dropbox

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q>

So I kind of cheated here, but basically, once you get to the authorization step, a new link will open in your browser. Log into Dropbox and it will ask whether rclone should be allowed to access Dropbox. Grant access and you're done. It is pretty easy. Now, I named my remote "dropbox"; maybe that wasn't the best name to differentiate it, but oh well.

Once you get to this point you can do something like

rclone ls dropbox:

Which will get you a nice list of files you currently have in your dropbox.

Now for the fun part… mount.

There is a ton of information over at https://rclone.org/commands/rclone_mount/, but really, all you need to do is

rclone mount dropbox:/ /path/to/mount/point &

You must background the process to get your shell back. The mount is only active while that program is running, and it does not appear in your list of mounted drives in Linux, so running something like df will not show the mount point. However, whatever user ran the command will see the files when looking in that directory.
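
Alternatively, if you'd rather not keep a backgrounded shell job around, rclone can daemonize the mount itself, and you unmount with fusermount. A sketch (the cache-mode flag is optional, but helps with programs that reopen files while writing):

rclone mount dropbox:/ /path/to/mount/point --daemon --vfs-cache-mode writes
fusermount -u /path/to/mount/point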

Building OpenVAS in Slackware

I'm a huge fan of OpenVAS. It is a great tool for probing your network and finding possible security holes. Many of you have probably heard of Nessus, another fantastic tool, but it can be pretty pricey. I would recommend it for business use, but for home use, go with OpenVAS.

In many cases, I would recommend you set up a Linux distribution called Kali Linux. It has a lot of really good tools built right in, including OpenVAS, but I've started running into issues with it lately. I'll run a scan, and the system's load gets so high that it becomes completely unresponsive for days at a time, then the scan fails to finish. I'm not sure what I'm doing wrong there, so I decided to wipe the machine and put my good ol' Slackware back on it. After using it for several weeks, I have decided to leave it on Slackware, as those issues have disappeared. So now I'm going to point you in the direction to get OpenVAS installed, plus a few extras that will make things easier.

I'm going to assume you are familiar with slackbuilds.org, and hopefully with a wonderful tool called sbopkg, as some wonderful people over there have build scripts for OpenVAS that will make your life so much better. Kent Fritz has written a great guide on how to get going over on slackbuilds.org. Go through his steps, then come back here.

FYI, I have built and used OpenVAS on both 32- and 64-bit Slackware, and even on ARM using a Raspberry Pi. I've only had one program (hiredis) fail to build using sbopkg, so I had to do it the old-fashioned way: download the build script and source and build outside sbopkg.

Note that while going through the instructions over on slackbuilds.org, before running any type of sync command, stop the running processes like openvasmd and openvassd. The first time they run, they require a large amount of memory and will crash on the Raspberry Pi (I'm not sure about the Pi 2; I haven't tried yet). By ensuring those processes are not running, the sync will finish properly.
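
For example, before kicking off any sync on the Pi, something like this (using the rc scripts installed with the packages):

sh /etc/rc.d/rc.openvassd stop
sh /etc/rc.d/rc.openvasmd stop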

Welcome back. I'm assuming you followed the instructions over on slackbuilds.org and are ready to continue. Here are some tips and scripts to make life just a little easier.

First, edit some permissions:

chmod 755 /etc/rc.d/rc.redis
chmod 755 /etc/rc.d/rc.openvassd
chmod 755 /etc/rc.d/rc.openvasmd
chmod 755 /etc/rc.d/rc.gsad

Now we are going to create a bunch of scripts that will simplify everything.

/usr/bin/openvas-start

#!/bin/bash
echo "Starting OpenVAS Services"
/etc/rc.d/rc.redis start
/etc/rc.d/rc.gsad start
/etc/rc.d/rc.openvassd start
/etc/rc.d/rc.openvasmd start

/usr/bin/openvas-stop

#!/bin/bash
echo "Stopping OpenVAS Services"
/etc/rc.d/rc.gsad stop
/etc/rc.d/rc.openvassd stop
/etc/rc.d/rc.openvasmd stop
/etc/rc.d/rc.redis stop

/usr/bin/openvas-feed-update

#!/bin/bash
echo "Updating OpenVAS Feeds"
echo "Stopping OpenVAS if running..."
/usr/bin/openvas-stop
openvas-nvt-sync
openvas-scapdata-sync
openvas-certdata-sync
echo "Rebuilding Database"
openvasmd --rebuild
echo "You can start OpenVAS now if needed"

/usr/bin/openvas-setup

#!/bin/bash
test -e /var/lib/openvas/CA/cacert.pem  || openvas-mkcert -q
if (openssl verify -CAfile /var/lib/openvas/CA/cacert.pem \
    /var/lib/openvas/CA/servercert.pem |grep -q ^error); then
    openvas-mkcert -q -f
fi
openvas-nvt-sync
openvas-scapdata-sync
openvas-certdata-sync
if ! test -e /var/lib/openvas/CA/clientcert.pem || \
    ! test -e /var/lib/openvas/private/CA/clientkey.pem; then
    openvas-mkcert-client -n -i
fi
if (openssl verify -CAfile /var/lib/openvas/CA/cacert.pem \
    /var/lib/openvas/CA/clientcert.pem |grep -q ^error); then
    openvas-mkcert-client -n -i
fi
/etc/rc.d/rc.openvasmd stop
/etc/rc.d/rc.openvassd stop
/etc/rc.d/rc.openvassd start
openvasmd --migrate
openvasmd --rebuild
/etc/rc.d/rc.openvassd stop
killall openvassd
sleep 15
/etc/rc.d/rc.openvassd start
/etc/rc.d/rc.openvasmd start
/etc/rc.d/rc.gsad restart
/etc/rc.d/rc.redis restart
if ! openvasmd --get-users | grep -q ^admin$ ; then
    openvasmd --create-user=admin
fi

Here is a great program that can help find any issues while getting set up. This link is mentioned in Kent's instructions, so hopefully you have it already.

wget https://svn.wald.intevation.org/svn/openvas/trunk/tools/openvas-check-setup -O /usr/bin/openvas-check-setup

Here we are going to chmod those files:

chmod 755 /usr/bin/openvas-start
chmod 755 /usr/bin/openvas-stop
chmod 755 /usr/bin/openvas-feed-update
chmod 755 /usr/bin/openvas-setup
chmod 755 /usr/bin/openvas-check-setup

WOW! That is a lot! Alright, so several files have been created. Here is what each one does.
/usr/bin/openvas-start:
This will start all the services needed.
/usr/bin/openvas-stop:
This will stop all the services.
/usr/bin/openvas-feed-update:
This will update all your feeds.
/usr/bin/openvas-setup:
This script will help if you have any issues. Sometimes OpenVAS feeds cause an issue, and by running this command you will find it fixes the problem 99% of the time.
/usr/bin/openvas-check-setup:
This one will help you diagnose issues.
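
Putting those together, a typical first run with the scripts above looks something like this:

openvas-setup        # first-time certificates, feeds, and database rebuild
openvas-check-setup  # diagnose anything that is still off
openvas-start        # bring up redis, gsad, openvassd, and openvasmd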

Give it time:
When starting OpenVAS, each part is thrown into the background to finish loading. Depending on your computer's speed, it can take a while before you can do anything. It is best to watch with top, htop, or iotop to see when everything has finished loading, then proceed to use Greenbone.

Possible Issues:
When trying to log in to the Greenbone Security Assistant, you might get an error that says the OMP service could not be found. Try running openvas-check-setup. If you get an error saying there are no users, run openvas-setup; this will fix it. This is a problem I have seen several times in the past, on both Slackware and Kali, so I believe it to be a bug somewhere in OpenVAS.

I think that’s just about it. You should now be up and running with OpenVAS!

Fix BackupPC Not Getting All Your Windows Files

BackupPC is a fantastic tool for backing up all your machines. I use it to back up both Windows and Linux machines. Linux is easy; all you need is SSH and rsync. Windows, though, is kind of a pain: you need to use Windows shares in almost every case. In the future, I'll talk about how to use Cygwin with SSH and rsync to back up a Windows machine.

The problem I have is that there is a bug in Samba versions 3.6 to 4.1 that causes the tar backup method to stop before it finishes, while BackupPC reports the backup as complete. I haven't run into this with every Windows machine, but I have with most. Generally, what triggers it is logging in with a separate account to perform the backups instead of the machine's normal user account. If you back up a Windows machine using the smb method and it appears not everything is being backed up, then this is the guide you want to follow.

To start, I'm currently running Debian 7 (Wheezy) with Samba version 3.6. I tried getting Samba 4.2 to build, but several of my libraries are out of date. If you are currently running 4.0 or 4.1, you might be able to build 4.2 on your server. Otherwise, go with 3.5.22 (the latest in the 3.5 series at the time of this writing). (https://bugzilla.samba.org/show_bug.cgi?id=10605)

There are several packages that need to be installed for this to work. Every config is different, but all I had to install was autoconf, make, and gcc.

apt-get install autoconf make gcc

Now we need to download the Samba sources and build them, but not install.

cd /opt
wget https://download.samba.org/pub/samba/stable/samba-3.5.22.tar.gz
tar -zxf samba-3.5.22.tar.gz
cd samba-3.5.22/source3/
./autogen.sh
./configure
make

That was the hard part. If Samba didn't build correctly, you might be missing other packages. You may be told what they are; otherwise, Google.

Now set $SmbClientPath to /opt/samba-3.5.22/source3/bin/smbclient. You can either change $SmbClientPath in your main BackupPC config, or just change it for the hosts that are having issues. If you are reading this, I'm going to assume you know how to do that.
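
For reference, a global override looks something like this in BackupPC's config.pl (a sketch; on a Debian install the file typically lives under /etc/backuppc/, and a per-host override goes in that host's own .pl file instead):

$Conf{SmbClientPath} = '/opt/samba-3.5.22/source3/bin/smbclient';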

Now test (this will do a FULL backup, so it can take some time):

/usr/share/backuppc/bin/BackupPC_dump -v -f nameOfHost

You can watch as it goes along (note that you will NOT see it running in the GUI). This can take some time, but when it completes you will have an idea of whether everything worked or not.

Did this work for you? Did you build Samba 4.2 or newer? Let me know in the comments.

My Triumph And Trials In The Removal of Windows Server From the Network

A Little History

I've been working for this small company for just over a year and a half. This is my first true system administration job. I've had several jobs in the past where I did admin work part time, or on the side of my normal duties, but finally, after years of trying, I managed to land a great position. Most of my work consists of Linux servers, running everything from Ubuntu, to Debian, CentOS, and even a couple of Slackware servers. Some are in house, some are in the cloud. It feels great to finally be doing what I love. There was just one problem… Windows. You see, many years ago my company was part of a much larger organization with hundreds of employees. This smaller company spun off and took with it two Windows Server 2003 systems. These servers once did a lot of things: managed e-mail (Exchange), printers, file shares, Active Directory, VPN, and internal DNS/DHCP. Since I started, I've been fighting the AD (short for Active Directory) servers trying to keep them running (I should note that I'm a pure Linux guy; I hardly know Windows at all, and definitely not Windows Server). We also reduced the number of users who actually used AD. Most people in this newer, smaller company run their own flavors of Linux.

Damn Windows

A short time ago I informed management that our Windows Servers would not get updates for much longer, and that we should plan on a migration. After talking to Microsoft and getting clarification on pricing, I found that just upgrading would cost somewhere around $3,000. While this is not much in company money, it also means a lot of future headache for me in keeping them running, plus I've heard horror stories about the actual upgrade. I took it upon myself to get rid of this crap and just be done with it.

Now don't get me wrong here. Sure, I'm not proficient with Windows Server, but I wouldn't get rid of something that worked just fine. I've spent a lot of time keeping these servers running. They reboot at random times, half the patches fail to install, and sometimes the services on them will just stop working. The problem here is that these servers do not work! When they do reboot, they take 12 minutes and 34 seconds to boot (quad-core 3.6GHz Xeon, 8GB RAM, 15K drives on RAID 5). During that time, DNS stops working and I get complaints that the internet isn't working, or is very slow. For those who know what happens when your primary DNS goes down, you know what I'm talking about. I've even had times where, after a reboot, the services failed to start. Let's just say these servers are very broken, and I don't believe Microsoft is worth keeping around when so few computers even need to be on Active Directory in the first place.

Let's Do This

The first step was to find another DNS/DHCP solution. Easy; I've done this many times before. So I have the new server ready to put in place. I should note that we have a weird network configuration, and as of the writing of this portion, I have no idea if my solution will work. We have several VLANs, and DHCP is "helped" through the system to get to the correct server. I'm not going to go into the technical details of how this all works (partly because I don't fully understand it myself), so you'll just have to hope I figured it all out.

As far as VPN goes, PPTP is bad, weak, and apparently not a good one to use, so I opted for OpenVPN. It turns out the Windows setup is a little more difficult for non-tech-savvy users, but we will manage.

What Am I Doing?

Now the nightmare begins: removing machines from the AD. I was given two computers when I started at this company: a Windows laptop, and a desktop where I could install any Linux distribution I want. I started the AD removal on my laptop, and everything went perfectly. At this point I figured, how hard could this be? So I removed it from a Windows server that manages our phone system (no one touches that machine; we are all too afraid). This one wasn't too bad, but it did kill the backup user. Once I added it back in, all was good; I'm still getting backups. Next came a very old XP machine that hosts QuickBooks. After removing it from the AD, I couldn't log in! OK: boot a recovery disk, wipe the SAM tables (Microsoft's password database), reboot, add password, done! Woohoo… well, no. It turns out it had a share for our CFO. Crap. It took me a while, but I finally got that share working and he was able to access QuickBooks. As before, this one ended up broken on the backup system, but it was an easy fix, as all I had to do was change the user our backup software uses to connect to the shared folder. All is good.

Before going too far, I want to let you all know that I'm writing this as things are happening. At this point I still have a machine in our conference room, one Windows 8 machine, four more Windows 7 machines (one of which I'm worried about, since it is our CFO's), and a bunch of XP machines in our dev lab. Yep, XP. We have older equipment whose software will not run on anything above XP, so I have to keep those machines very hidden and spend a little extra time on them to ensure they are OK.

Anyway, I ended up having another issue when doing our conference room PC. It seems this computer didn't keep some of the group policy settings that the others did. Granted, I was able to just change the backup user, but I actually had to create another share. What a pain, right? To make matters worse, I also had to edit the firewall settings to allow ping (my backup software requires that it can ping the machine) from any subnet, and allow SMB access from any subnet. You see, the backup software host and most other computers are on different subnets, so I had to adjust for that. Live and learn, I guess.

Pressing On… And On… And On…

Here I am again, Thursday morning (almost a week after starting this whole process), wanting to remove more machines from the AD. Then I thought about it again. Due to the configuration of our switch, and the need to forward DHCP packets correctly to the Windows servers, what are the chances that this DHCP helper option won't work correctly with Linux? While the switch does have fantastic documentation, it doesn't tell you squat about what to do on the DHCP server side. My heart is racing, fears are rising. What if I can't get this to work? Am I going to be stuck running Windows Server forever, simply because I don't understand this very complex switch? This thing's config hasn't been touched since 2008, and even contacting someone who worked with it back then (pretty cool of the guy to talk to me after not being with the company for over 5 years) proved no help, as he worked with 3 other people on getting this thing set up, and he didn't know the CLI password (which is where I have to change these settings). I can't get a hold of anyone else from back then. Looks like I'm on my own… again.

Well, I can’t test this today, it would interrupt everyone. Looks like I’m going to have to work on something else for the next couple of days then come in over the weekend.

Fast forward a couple of days, and here I am. Easter Sunday… at work. Oh well, it worked out pretty well this way. Sure, I would like to be spending time with family, but since everyone else is, I figure now is the perfect time to take down the network and move everything over.

So it begins, nice and early. First I disconnect the Windows servers from the network, then I change the IP address associated with the Linux box that will control this network. All services now up and running.

Time to Test

Well, crap. It seems the Windows computers had some issues getting DNS updates. Sometimes I can refer to another computer by name, sometimes only by the full name (meaning with the internal domain name appended), but only sometimes. After spending hours working on it, I still have no solution… I'm starting to think many of these had issues before, and it could have something to do with their own hostnames. After all, Linux likes it when you give it a domain name. Either way, it works as well as before… I think. So I continue on.

One Linux server got an IP address outside the DHCP range. I have no idea how that is possible. Screw it; it gets a static IP and a DNS name. Fixed.

After getting through a couple of machines that just didn't want to play nice, I got through the rest with hardly an issue. It actually went much more smoothly than I thought it would. After a few hours, the network was up and running again!

Now, unfortunately, I wasn't done yet. There were a few more items that needed to be dealt with. First, the new OpenVPN. Done. Oh… that's right, I just needed to make a simple change, and everything works: I had forgotten to adjust the server IP in the configuration files to reflect the server's new IP. Tested, and working great. Cool.

D’oh!

What about PPTP, you ask? Well, yes, I did want to get rid of it. The problem was that many of my users were still set up to use it and hadn't been given their new OpenVPN keys. I'll deal with that next week, but the problem remains of ensuring they can still get in. So I fire up a machine and get to testing (using a 4G modem so I'm outside the corporate network). Connecting… connecting… verifying username and password (I didn't know PPTP was this slow, holy crap!)… damn. It just isn't going to work for me. I've actually never used the old VPN, so I have no idea if it would have ever worked before or not. I hate to say it, but I think I'm going to have to wait until tomorrow morning and see who complains.

UPDATE: Upon further investigation, I've found that Windows Server 2003 will NOT let you use its VPN unless it can run the DNS/DHCP. Shit… just another reason to move away from Windows.

Up and Running

Now, where was I? Oh, right. So, everything is up and running. I found a few machines that had been given static IPs by the Windows servers but were not listed anywhere, so once they got their new addresses I found their services not working. This is because of our firewall rules. So I adjusted the rules and set static IPs for those systems, and they should now always keep those addresses.

Most internal DNS is working without having to type in the domain name. I'll work on the rest of those throughout the week. I'm not anticipating this will cause any issues for my local users (users from the other office still have to type the internal domain name; I'll work on that later). So maybe now I'll finally call it a day. I've been here for roughly 12 hours now, and I would like to go home.

From this point, all I need to do is get all the workstations off the AD that no longer exists. I worry there will be issues just leaving them alone, since I have no idea how long it will take before these workstations decide they can no longer log in without the missing AD controller. So over the next week I'll get this taken care of. I just need to get backups working and any mounted drives working for each user, and then I'm done! Oh, I can't wait!

Getting There

Fast forward a bit, and another week has passed. During this time I've ended up with just two desktops that still need to be taken off the old AD. One is Windows 7, the other is 8.1. During the process of getting off the AD, I have to reboot and perform some work on the firewall, so I like to set up times when I can do this with each user.

First came the Windows 8.1 machine. Oh man, do I hate Windows, especially 8.1. This thing caused all sorts of problems. There is so much hidden shit all over Windows that it just drives me insane. I couldn't get the user's account to log in because of group policy. That was the actual error: access denied by group policy. So I checked the group policy on the computer… there wasn't any! Eventually I had to just create him another account, under a different name, and copy his personal files over. What a joke. I had the same type of issue with the Windows 7 machine, which sucked because that guy handles all our finances and I really didn't want to cause issues for him. His wasn't nearly as bad, but it still ended with me having to copy his files over to a new account. Fortunately, he was able to keep the same user name. Some things ended up in other folders, so after several hours we located all his files and got him all set up and good to go. I was pretty happy about that. I always like when things work out well after they seemed to be going so badly.

Userland

Speaking of Windows and its userland: I hate how Windows handles this. It makes it very hard to move to a new machine with all your settings exactly how you like them. I ended up screwing up one e-mail account because the user uses Outlook, and apparently you can't just "import". You have to export, then import; you can't just copy config files over. This is according to MS! This is why I run Linux. The last time I moved from one machine to another, I copied (scp) my home directory over to the new computer, started X (I like KDE), and guess what? Everything was there exactly how I had it on the other computer! Amazing! Also, I know there are tools provided by Microsoft designed to help with this; unfortunately, those don't work in my situation.

DONE!

So here I am, almost 3 weeks after starting this project, and I'm finally done. Every computer is getting backups, and they can access and be accessed from other VLANs (I know, I know, that's not how you use VLANs; shut it, I like it). It has been a pain, and I wouldn't recommend it to anyone, especially if you are the ONLY one doing it and you are not a Windows guy. So, in the end, here is my advice. If you are running AD, just keep giving MS all your money and hope it keeps working. If you are not running AD, DON'T! Stay away! If you must have it, be sure to hire someone just to handle it, and make sure that is their only job.

Thank you for letting me tell my story, and if you made it this far, good on you!

UPDATE: I wish I had saved the link, but I found something that MS apparently does with the newer versions of Windows Server. I already knew that I have to pay a lot just for the OS, but then I have to pay an additional price for each user, or CAL as they call it. Well, apparently CALs are not just for users of Active Directory; you have to have one for each machine that uses DHCP! I have a lot of Linux servers and even more as virtual machines. I refuse to keep giving money to MS for each machine just to use DHCP/DNS. That is a load of crap. Some people who commented on the article said you don't really have to, but if MS decided to audit your network, you could end up having to pay a lot of money. I don't know how true this is, but I wouldn't be surprised. Glad I got away from that train wreck.

Adafruit touchscreen on a Raspberry Pi B running Slackware ARM

I recently had the opportunity to get a friend a new Raspberry Pi Model B. I really like these; I have several, all running Slackware ARM. While I have tried other distros, I find myself always going back to Slackware after a while for one reason or another, but that is a talk for another day. My friend decided he wanted to run Slackware on one of his Raspberry Pis, so I helped him out and got everything installed. Then I was presented with another issue: he wanted to use his Adafruit touchscreen. Now, Adafruit's documentation and setup guides are really good, but only if you are running Raspbian (or a Debian-based system). So that did present an issue, but one I wanted to solve.

Before I continue, please note that I did get this working (video and images at the bottom of the post), but I had to deliver the Pi back before I got a chance to try again from scratch. This guide is based heavily on my best recollection of the steps I took, and may not be complete. If I'm missing something, or you can't get it to work, let me know and I will try to help… or send me a screen so I can do this again.

Start by checking out this guide, which was helpful in getting everything going: https://learn.adafruit.com/adafruit-pitft-28-inch-resistive-touchscreen-display-raspberry-pi/software-installation

Now, download all the needed files:

wget http://adafruit-download.s3.amazonaws.com/libraspberrypi-bin-adafruit.deb
wget http://adafruit-download.s3.amazonaws.com/libraspberrypi-dev-adafruit.deb
wget http://adafruit-download.s3.amazonaws.com/libraspberrypi-doc-adafruit.deb
wget http://adafruit-download.s3.amazonaws.com/libraspberrypi0-adafruit.deb
wget http://adafruit-download.s3.amazonaws.com/raspberrypi-bootloader-adafruit-20140917-1.deb

Download and install deb2tgz (https://code.google.com/p/deb2tgz/). This will help you convert those .deb files to .tgz packages for Slackware.

Now convert the .deb files to .tgz and install the resulting packages:

deb2tgz *.deb
installpkg *.tgz

Now you need to make a copy of raspberrypi-bootloader-adafruit-20140917-1.deb and place it in another directory. Once there, run:

ar x raspberrypi-bootloader-adafruit-20140917-1.deb

This will explode out the archive. Find the file called data.tar.gz, and run:

tar -zxf data.tar.gz

Now there will be some new directories, one of which is called boot. Make a backup of your /boot directory, then copy everything from the new boot directory to /boot.

cp -r /boot /boot.bak
cd boot
cp * /boot

This will install the correct kernel that you need to use.

Next, open /boot/config.txt. The only line you need in it is gpu_mem=32.

Now, there are a few more packages you need to install. The first is called evtest. I found an awesome SlackBuild repository over at https://github.com/PhantomX/slackbuilds.git, and we are going to install its evtest package.

git clone https://github.com/PhantomX/slackbuilds.git
cd slackbuilds/
cd evtest/
./evtest.SlackBuild 
installpkg evtest-1.32-x86_64-1root.txz 

Notice how the arch is listed in the Slackware package name as x86_64. Don't worry, it works; just install it.

Next is tslib. Here is how I built and installed it (also, I cheated and did not build a Slackware package).

wget http://ftp.de.debian.org/debian/pool/main/t/tslib/tslib_1.0.orig.tar.gz
tar -zxf tslib_1.0.orig.tar.gz 
cd tslib-1.0/
./autogen.sh 
./configure
make
make install

Last, we need to build a package called xf86-video-fbturbo. (Forgive me: you may not need to run make in that first directory, but you definitely do in the src directory.)

git clone https://github.com/ssvb/xf86-video-fbturbo
cd xf86-video-fbturbo/
./autogen.sh 
make
cd src
autoreconf -vi
./configure --prefix=/usr
nano xorg.conf 
make
make install

There, that was fun! Alright, let's edit a few more files.

Open /boot/cmdline.txt, and place this one line in there (it is the only line for me; yours may be different):

dwc_otg.lpm_enable=0 console=tty1 nofont root=/dev/mmcblk0p3 fbcon=map:10 fbcon=font:VGA8x8 rootfstype=ext4 rootwait ro

Then open /etc/X11/xorg.conf.d/99-calibration.conf (if the directory or file does not exist, create it!) and place this in the file. Note the commented-out items; I meant to experiment with them. I don't remember if those options break anything, but I doubt it.

Section "InputClass"
    Identifier "calibration"
    MatchProduct "stmpe-ts"
    Option "Calibration" "3800 200 200 3800"
    Option "SwapAxes" "1"
EndSection

Section "Device"
        Identifier      "Allwinner A10/A13 FBDEV"
        Driver          "fbturbo"
        Option          "fbdev" "/dev/fb1"
#        Option          "SwapbuffersWait" "true"
        # `man fbturbo` to know more options
#        Option          "AccelMethod" "G2D"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    Option "DPMS"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "main"
    Monitor    "Monitor0"
    DefaultDepth 16
    SubSection "Display"
            Depth 16
            Modes "320x240"
    EndSubSection
EndSection

Lastly, open /etc/rc.d/rc.local and add these lines:

modprobe spi-bcm2708
modprobe fbtft_device name=adafruitrt28 rotate=90 frequency=32000000
export FRAMEBUFFER=/dev/fb1

I also recommend adding the FRAMEBUFFER export to your user's ~/.bashrc file; you need it set before X will start.

I really hope I got this all right. Feel free to complain. Maybe I'll get another chance to play with this again in the near future, and this time I'll get it right! I wish you all the best of luck. Cheers!

[Photos of the touchscreen up and running]

If the video above isn’t loading, you can view it here: https://www.youtube.com/watch?v=KpzBYshxY9c

Netflix on Slackware

UPDATE! This doesn't seem to be required anymore. Netflix should work fine as long as your mozilla-nss is up to date and you are running Chrome 39 or higher!

Getting Netflix to run on Linux has been in the news again. Previously, you had to use Pipelight and Wine to get everything running, and even then, I've heard it didn't work all that well. I never did try it myself, because I run Slackware64 without multilib, so I can't even execute 32-bit applications like Wine.

Fortunately, some very smart people have figured out how to get real native Netflix working in Linux. Many of the sites out there show you how to do it with Ubuntu, but that didn’t work for me, which I’ll explain later.

Let's get going. First you need Chrome 37 or newer. Snag the build scripts from the Slackware extras section: http://mirrors.slackware.com/slackware/slackware64-14.1/extra/google-chrome/. Adjust the URL if you are not running 64-bit. This also works (from what I'm told) if you are running Slackware 14.0.

Next, snag the latest .deb from Google at https://www.google.com/chrome/browser/ and select either the 32-bit or 64-bit download, depending on what you're running.

After running the build script, you will have a package ready to install in your /tmp directory. The best part of the Chrome build script is that it figures out the version number of Chrome before building the package. Install the package using installpkg.
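
For reference, the whole dance on my end looked roughly like this (a sketch; it assumes the SlackBuild script and the downloaded .deb sit in the same directory, and the exact package name in /tmp will vary with the Chrome version):

./google-chrome.SlackBuild
installpkg /tmp/google-chrome-*.txz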

Before moving on, I want to explain the difference between the Ubuntu and Slackware setups. As I started writing this, I had a package that was just a tiny bit out of date. One of the requirements is libnss; at the time I was running version 3.16, and you need 3.16.4. I found a great post over on linuxquestions.org with instructions for building the newer version of libnss, but just before posting this I found that the wonderful Slackware maintainers had updated it for me! Just make sure to fully patch your system and you will get mozilla-nss-3.16.4.xxxxx.txz. If you actually read this entire paragraph, I'm impressed. Thanks for making it worth my time writing it.

Start up Chrome, go to https://chrome.google.com/webstore/detail/user-agent-switcher-for-c/djflhoibgkdhkhhcedjiklpkjnoahfmg, and install the extension. Once installed, right-click on the new icon and select Options. Here we are going to set up a new custom user agent that will allow Netflix to play. Put the following values in the fields.

New User-agent name: Netflix Linux
New User-Agent String: Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2114.2 Safari/537.36
Group: Chrome
Append?: (defaults are fine)
Indicator Flag: IE

Be sure to select your new user agent, then log in to your Netflix account. Once in, go to your account settings and select the option to use HTML5 instead of Silverlight. Enjoy watching Netflix on Linux!

Things learned while using Perfect Forward Secrecy

Recently I upgraded some of my servers to use PFS. For those not familiar with PFS, please read https://en.wikipedia.org/wiki/Forward_secrecy. Now, I must admit, I have a strange setup of servers. I have a set of servers for each client, and each performs a different function. One is an app server that can be accessed at client.company.com. These servers run nginx. I also have a Nagios and BackupPC server that is accessed at nagios.customer.company.com. I purchased a wildcard SSL certificate for *.company.com. Before you say anything, I realize that using multilevel subdomains is not compliant with wildcard certs (see RFC 2818 and RFC 2459). Nagios is ONLY for me, not the customer, so I don't care about the SSL warnings. To help illustrate:

customer.company.com -> nginx
nagios.customer.company.com -> Apache

Unfortunately, I'm pretty locked down on what OS I can run, and currently it is Ubuntu 12.04 (yes, 14.04 is out, but it has too many issues and our software doesn't run stably under it). That means I have nginx 1.6 and Apache 2.2. Nginx 1.6 supports PFS, which I implemented without an issue, while Apache 2.2 does… but doesn't. It appears to be some hack job by Ubuntu: version 2.2.22 doesn't have PFS, but 2.2.27 appears to. As we know, Ubuntu back-ports patches, and it looks like a patch got in that gives semi-support for PFS. I decided not to use it.

Here is the interesting thing. With this setup, whether I set up PFS on Apache or not, I got the same results. The instant I set up PFS on nginx, I could no longer use the Nagios server on the Apache machine. In Firefox I would get an SSL error that I couldn't bypass. No matter what, it wouldn't work, even with other browsers.

So I did a test: what would happen if I followed the rules for using wildcard SSL certs? I changed the Nagios server to work under nagios-customer.company.com, and guess what? It actually worked!

I have no idea why this is. It seems the browser is remembering something about the certificate, because it is the same one used on all the servers. I tested this by having a machine connect just to the Nagios server (before the URL change), and it worked until I accessed the app server with the same browser.

I’m now following the rules on wildcard SSL certs, and my naming is as follows

customer.company.com -> nginx
nagios-customer.company.com -> Apache

Now everything works, even though I’m not using PFS on the Apache servers.

If you want to set up PFS, check out this page, https://community.qualys.com/blogs/securitylabs/2013/08/05/configuring-apache-nginx-and-openssl-for-forward-secrecy, as it has great information. Be sure to use https://www.ssllabs.com/ssltest/ to test your server.
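
For the curious, the nginx side of a PFS setup boils down to a few directives along these lines. This is only a sketch preferring ECDHE/DHE ciphers; the Qualys page above has the fully vetted cipher strings:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:!aNULL:!MD5:!RC4;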

Lastly, I put this up in the hope that if anyone else runs into this issue, it will provide some insight into resolving it. If you have any additional information, please leave a note in the comments below. (You don't even have to give a real e-mail address; if you do, I won't spam you or anything.)

Another way to limit SSH key access to a specific IP

There are a couple of ways to limit where users can and cannot use SSH keys for quick access to systems. The most common I have found is to put access requirements in the sshd_config file. That is a great method, with quite a few ways to customize it, but what if you only want to limit access for one user? Or maybe you don't have root/sudo access on the machine and you want to limit your own account. These might not be the best examples, but here is another nifty way to limit access with SSH keys. Please note that this only blocks access with the key; it will still allow you to log in with a password!

Let's get started. I'm going to assume you have already generated your key and used ssh-copy-id to get it onto your remote host. SSH into the remote host and open your user's authorized_keys file, which is usually in ~/.ssh/. You are going to see something like this:

ssh-rsa AAAAB3Nza...LiPk== user@host

Simply add from=”host” in front of it and you are set! Seriously, it is that easy.

from="192.168.1.3" ssh-rsa AAAAB3Nza...LiPk== user@host

The best part here is that you don't need to do anything for the changes to take effect. No restarting sshd, or even logging out of the machine; it just works right away.

You can add other options in authorized_keys, and not just limit by IP. What if your IP changes? Maybe you are always in a particular address range somewhere, or perhaps you want to force a VPN connection before allowing SSH (I'm just throwing out ideas here). I use it for my backup system: I allow root access only from that one machine, and otherwise you can't log in as root at all. I allow only the one key and no password. So right there is a good use; an example of stacking options is shown below.
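
As an illustration of stacking options (the IP and restrictions here are hypothetical; adjust to taste), a locked-down backup key might look like this:

from="192.168.1.3",no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAAB3Nza...LiPk== user@host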

To see more options and additional syntax, open a terminal, type "man authorized_keys", and scroll down to the section titled AUTHORIZED_KEYS FILE FORMAT. There is a lot of good information there.

Quick and dirty guide to OpenVPN on Slackware Linux and Android

Like many of you, I'm concerned about security, especially when working remotely. Generally, I would simply create a tunnel using SSH, but then I must set all my programs to use the SOCKS5 tunnel. This isn't always possible without first opening the program, which will generally try to form a connection right away. Perhaps not the best way to keep safe on a network you don't trust (like a coffee shop).

Unlike an SSH tunnel, which requires setting proxy settings for all your programs, with something like OpenVPN you can redirect all your traffic through the encrypted tunnel without having to configure anything per program, all thanks to iptables.
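
For comparison, the SSH approach I'm describing amounts to this (a hypothetical host; you then have to point each program at localhost:1080 as a SOCKS5 proxy):

ssh -D 1080 user@remote.example.com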

Here is my quick and dirty guide to getting your very own OpenVPN server set up on Linux, as well as setup for two types of clients: one a Linux client, the other CyanogenMod's Android.

With this guide, I'm going to assume you already have OpenVPN installed and ready to go, and that the configuration files are in /etc/openvpn/.

Server Setup

First off, we need to generate some keys; these will be used to secure the connection. OpenVPN comes with all the tools you need to generate keys and indexes. Look for the easy-rsa directory that comes with OpenVPN. In my case, it's in /usr/doc/openvpn-2.2.2/easy-rsa/2.0/.

In that directory you will see a lot of scripts. Before doing anything, you need to edit the file vars. This file holds several settings, the most important being the ones dealing with the OpenSSL keys. Here is a quick example you can base your configuration on, with all the comments removed.

export EASY_RSA="`pwd`"
export OPENSSL="openssl"
export PKCS11TOOL="pkcs11-tool"
export GREP="grep"
export KEY_CONFIG=`$EASY_RSA/whichopensslcnf $EASY_RSA`
export KEY_DIR="/etc/openvpn/keys"
export PKCS11_MODULE_PATH="dummy"
export PKCS11_PIN="dummy"
export KEY_SIZE=1024
export CA_EXPIRE=3650
export KEY_EXPIRE=3650
export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="City"
export KEY_ORG="domain name"
export KEY_EMAIL="emailaddress@domain"

Note the export KEY_DIR; this is important. You will get warnings about running ./clean-all, because it deletes ALL the keys in that directory.

After editing the vars file, we need to source it to load the values into the environment, then clean the keys directory. Do so by running:

. vars
./clean-all

Yes, you read that right: period, space, vars.

Now we are going to generate keys for the server and two clients.

For the server, we just need to run a couple of quick and easy commands.

./build-ca
./build-dh
./build-key-server server

The last command will build a server.key file. This is needed when running the server for key exchanges and such.

Now there are 3 different ways to build keys for clients.
./build-key client (no password protection, not recommended)
./build-key-pass client (with password protection, recommended)
./build-key-pkcs12 client (PKCS #12 format, good for Android)

For the Linux client configuration below, I'm not sure if you can use the PKCS #12 format. I haven't tried, but if it works for you, please let me know.

Now we need to edit /etc/openvpn/openvpn.conf for our network setup. Most of the config options are self-explanatory. Here is my example:

cd /etc/openvpn #yes, you do need this for some damn reason
local localIP
proto udp
port 1194
comp-lzo
verb 3
log-append /var/log/openvpn.log
dev tun0
persist-tun
persist-key
server 172.16.1.0 255.255.255.0
ifconfig-pool-persist /var/log/ipp.txt
client-to-client
push "route 10.0.0.0 255.255.255.0"
push "dhcp-option DNS 10.0.0.1"
push "dhcp-option DOMAIN domain.tld"
push "redirect-gateway def1"
keepalive 10 120
cipher BF-CBC
ca keys/ca.crt
dh keys/dh1024.pem
key keys/server.key
user nobody
group nobody
status /var/log/openvpn-status.log

Be sure to change localIP to the server’s IP address AND (if applicable) forward UDP port 1194 to the server.

NOTE: There is one issue I have run into. The option push "redirect-gateway def1" does seem to work fine and redirect everything through the VPN, but I have an issue getting the DNS and DOMAIN options to work through both the OpenVPN software and my Android. This means all DNS queries do not appear to be going through the VPN. That may not actually be the case; I have yet to set up a packet sniffer to check. So for the time being, I simply created a bash script that edits my /etc/resolv.conf file when I start the VPN and reverts it when done. If someone knows of a really easy way to check without having to use a sniffer, please let me know.

Now that all of the keys are built and the openvpn.conf file is set up, we are ready to start the server. While I have run into some strange behavior in my configuration, you may have better luck in yours. In mine, I had to create the tun device, enable ip_forward, and manually configure the iptables rules.

Here is the simple script I run on the server when I want the OpenVPN server up and running (yes, I do this at boot). An explanation of the items is below.

mkdir /dev/net
mknod /dev/net/tun c 10 200
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -I FORWARD -i tun0 -o eth0 -s 172.16.1.0/24 -d 10.0.0.0/24 -m conntrack --ctstate NEW -j ACCEPT
iptables -I FORWARD -i tun0 -o eth0 -s 172.16.1.0/24 -m conntrack --ctstate NEW -j ACCEPT
iptables -I FORWARD -i eth0 -o eth0 -s 10.0.0.0/24 -m conntrack --ctstate NEW -j ACCEPT
iptables -I FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -I POSTROUTING -o eth0 -s 172.16.1.0/24 -j MASQUERADE
iptables -t nat -I POSTROUTING -o eth0 -s 10.0.0.0/24 -j MASQUERADE
openvpn --config /etc/openvpn/openvpn.conf --cert /etc/openvpn/keys/server.crt &

Most places I have found this stuff are not very specific about the IPs, so let me give you a quick rundown on each item.

First we create the device with some special settings: that is the mkdir /dev/net (if /dev/net already exists, it does nothing), then mknod /dev/net/tun c 10 200. Then we set ip_forward to true. The fun part is the iptables rules.

So in my example, tun0 is the virtual device that is the VPN, and eth0 is my ethernet. 172.16.1.0/24 is the IP range I'm giving to the VPN (tun0), and my physical network is 10.0.0.0/24. You can leave the VPN network on 172.16.1.0/24; simply adjust the 10.0.0.0/24 to your networking configuration (e.g. 192.168.0.0/24). As for how all those iptables rules work… yeah, I'm not going into it. They work; I'm fine with that.

After running those commands, your OpenVPN server should be up and running. The final process is backgrounded so you get your terminal back. Wait a few seconds and hit enter again; if you don't see that the process has ended, then you have done everything correctly. If it did error out, check /var/log/openvpn.log for information on what is causing the problem.

Client Configuration

Now that the server is set up, let's get the client side going. This part covers the OpenVPN software running on Linux; see the next section for CyanogenMod's Android.

This part is much easier than the server setup, but you need to get your keys to the client. I highly recommend you do this via scp. You will need ca.crt, client.crt, and client.key (assuming you called your keys "client"). Put these files in /etc/openvpn/keys, then create the file /etc/openvpn/openvpn.conf and put this in it:

remote IP/DNS 1194
proto udp
dev tun  
cd /etc/openvpn/
ca keys/ca.crt
cert keys/client.crt
key keys/client.key
client
ns-cert-type server
keepalive 10 120
comp-lzo 
user nobody
group nobody
persist-key
persist-tun
status /var/log/openvpn-status.log

Change IP/DNS to the IP or DNS name your server is reachable at. You should now be able to connect to your OpenVPN server by typing:

openvpn --config /etc/openvpn/openvpn.conf

That's pretty much it. Once you get a handle on the settings, it is actually pretty easy. However, as mentioned before, I have found a possible issue with DNS, so I would highly recommend editing /etc/resolv.conf to point at your DNS server. In my example, the DNS server is also the gateway (10.0.0.1). You can script this; in fact, use my script:

#!/bin/bash
pid=`pgrep openvpn`
if [ -z "$pid" ]; then
echo "Starting OpenVPN Client"
cp /etc/resolv.conf /etc/resolv.conf.backup
echo "nameserver 10.0.0.1" > /etc/resolv.conf
openvpn --config /etc/openvpn/openvpn.conf &
else
echo "Stopping OpenVPN Client"
mv /etc/resolv.conf.backup /etc/resolv.conf
kill $pid
fi

Pretty straightforward, if I do say so myself. You may have an issue if you have a passphrase on your key! If so, remove the ampersand (&) from the end of the openvpn --config line. This will not background the process, but you can do that manually by typing ctrl+z then bg, which will background the process.

CyanogenMod’s Android Configuration

Because I don't run the Android that came with my phone, I can use OpenVPN with ease. If you are not running a custom ROM, you can still run OpenVPN by getting the client software from the Android Market (now called the Play Store). The following instructions are for CyanogenMod 7.2, but should work just fine in newer versions.

Remember when you made your client key? Well, you need to make one that works well with Android: the PKCS #12 format. This will give you a file that ends in a .p12 extension. Copy this file over to the root of your sdcard.

Install the certificate by going to Settings->Location & Security->Install from SD card (under Credential storage at the bottom of the menu). It should find the file and ask for the password to unlock it. Then it will ask for a new password (you can use the same one as before), and you can also give it a custom name.

Build the client by going to Settings->Wireless & Networks->VPN Settings->Add VPN. You just need to select the OpenVPN type. In the new menu there are several settings.

VPN name (this can be anything you want)
Set VPN server (the IP or domain name of the server)
User authentication (leave unchecked)
Set CA certificate (click this and select the key you just installed)
Set user certificate (same as above)
DNS search domains (these are optional, but you can set 10.0.0.1 like in the bash script above)

Hit the menu button then Advanced.

Server port (default is 1194)
Protocol to use (udp is default)
Device to use (tun, which is fine)
LZO compression (check it!)
Redirect gateway (check it!)
Remote Sets Addresses (Should also be checked)

Everything below that I left as default. You do NOT need to enable TLS-Auth. For this type of setup it is unnecessary.

Hit back, then save. From here you should be able to connect to your VPN. Note that in my tests, the VPN is much slower. I’m not sure if it is something I have done wrong in my setup, or if my provider throttles VPNs.

Conclusion

Everything should be up and running now. I hope you found this useful. Please feel free to leave a comment below; if you have any suggestions or questions, you can drop those below as well. I'm not an expert on OpenVPN, I just like learning.

Sources:
http://openvpn.net/index.php/open-source/documentation/miscellaneous/77-rsa-key-management.html
http://openvpn.net/index.php/open-source/documentation/howto.html
http://blog.johnford.org/openvpn-tunnel-to-home-server/
