Setting up OpenSSH keys, to avoid having to type a password when logging in to remote systems, is pretty straightforward. In brief:
Generate public and private keys (just hit enter when prompted for a password):
ssh-keygen -t rsa
Then copy the keys to the hosts you want to be able to log in to:
ssh-copy-id hostname
You can pass the -b option to ssh-keygen and specify the number of bits you want to use. The default is 2048. (A lot of articles I've read go with 4096 bits. But then I've also read it doesn't make a whole lot of difference, as 2048-bit RSA is considered secure for the foreseeable future.)
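If you do want the bigger key, it's as simple as something like:
ssh-keygen -t rsa -b 4096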
And as an aside, I've actually started going with the -t ed25519 option, which is a newer algorithm. It's not supported everywhere though, so creating fallback RSA keys might still be a good idea.
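Generating one is the same drill, just with a different key type (it lands in ~/.ssh/id_ed25519 by default, so it won't clobber an existing RSA key):
ssh-keygen -t ed25519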
Also, instead of using ssh-copy-id, you could of course manually edit the ~/.ssh/authorized_keys files on remote systems. This is what I used to naively do in fact, but I see no reason not to use the ssh-copy-id command at this point.
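If you ever do need the manual route (say, ssh-copy-id isn't available), the rough equivalent is something like the following, followed by making sure ~/.ssh is mode 700 and authorized_keys is mode 600 on the remote end:
cat ~/.ssh/id_rsa.pub | ssh hostname 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'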
If you do password protect your private key, when you attempt to log in to a remote machine, the difference will be that rather than typing in your password on the remote system, you will need to type a password to decrypt your private key. For me, this sorta defeats the purpose since I like not having to type a password (I do realize ssh keys are better from a security standpoint). So enter ssh-agent and ssh-add. With these programs you'll only need to type a password once per session.
Assuming you use bash:
eval $(ssh-agent)
ssh-add ~/.ssh/id_rsa
Even better though, from my viewpoint, is the keychain program. With keychain you should only have to type your password once between boots. Keychain will start ssh-agent if it's not already running, or connect to an already-running instance if it is, and it will add any keys you specify.
In short, I've added this to my bash login scripts:
eval $(/usr/bin/keychain --eval /home/glen/.ssh/id_rsa)
So now, when it comes to SSH keys with passwords, I'm a happy camper.
And like always, this blog is mainly for writing practice and personal reference, but if anyone happens to get some use out of it, cheers.
Wednesday, October 5, 2016
Windows Server 2016 Updates
I installed a demo version of Windows Server 2016 over the weekend, and I was going through my usual drill of setting things up. I believe I was in the midst of installing Anaconda, a Python distribution for Windows, when I decided to grab a snack. Perhaps 5 minutes later I came back to "uh-oh... what's this? 30%, don't turn off my computer???" Windows had decided to reboot all on its own to apply updates, right in the middle of a software installation I was performing.
Apparently Server 2016 is going to be just as moronic by default as Windows 10. I just can't see how in the world Microsoft thinks letting a restart happen automatically after updates is a good thing on a server operating system. A bit of research and sure, it all seems tweakable, but in a very non-sane way. Some options are in the settings menu, some options you can tweak via group policy, other options you need to download a tool from Microsoft. Can the options not all be in one place? Argh! Can they at least have some sane defaults???
Or maybe I'm missing something that makes updates intuitive and easy to deal with, but it sure doesn't seem so. At this point it just boggles my mind how much of a nightmare it all is.
And yes, I am aware of WSUS, but that sure seems like overkill for a standalone server such as the one I'm currently experimenting with.
Wednesday, September 7, 2016
Gnome's Nautilus
I mostly stick to a Bash shell for my workflow. I just find things to be faster that way. Ha, or at the very least it gives me the illusion of being faster? Anyhow, every once in a while I do find myself using a GUI file manager. Viewing images is one great example, since skimming through a bunch of thumbnails beats any alternative method.
Now given that I mostly run Fedora Linux and that I have a tendency to go with the default of Gnome for my desktop environment, if I'm using a file manager it's going to be Nautilus. And most of the time Nautilus suits me just fine. It has a nice simple interface that at the same time is smartly designed enough to allow me to quickly navigate and manipulate my files.
There are two problems I have with Nautilus. First, they got rid of the delete option on the context menu. Or maybe there's some dconf voodoo I could perform to get it back, but that road is madness. I live with shift-delete for now.
The second problem for me: there are no options on the context menu to create new files anymore. At least not by default. Or maybe my memory is fuzzy after 24 releases of Fedora and the options were never there. Right. The workaround is to populate the 'Templates' directory in your home directory.
Any file you put into Templates will show up in Nautilus's 'New Document' menu, which is a submenu of the context menu. And when you create a new file using this menu, the contents of the template file are copied over to your new file.
So for example, if you want to be able to create a new empty text file, simply put an empty file in Templates and name it 'Text Document.txt'. Want a new Python script option? Put a file in Templates called 'Python Script.py' with a shebang up top. Now when you create a new Python script the shebang will be at the top of your new script. Anything that's in a template file gets copied over to the new file.
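Setting a couple of these up from the shell takes seconds (this assumes the usual English-locale ~/Templates directory):
mkdir -p ~/Templates
touch ~/Templates/'Text Document.txt'
printf '#!/usr/bin/env python\n' > ~/Templates/'Python Script.py'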
A few things to note about template files. Binary files are certainly an option here. Also, file extensions won't show up in the 'New Document' menu, but will show up in your newly created file's name. And if a file of the same name as the one you're trying to create exists already, don't worry, a number, starting at 2, will be appended to the new file's name.
Yep, Nautilus is a great application with only a few tiny warts from my perspective. And now I won't forget about templates ever again, ha, the whole reason I started this post. Cheers.
Saturday, August 6, 2016
Notes on Git
Git is a distributed version control system developed by Linus Torvalds. It's basically the weapon of choice when it comes to version control these days. So after using Subversion for the longest time, I made the jump to Git and haven't looked back. I do tend to forget things though if I go a while in between uses and/or if OS upgrades have happened in between. Yikes, it's been a year or so since I've pushed anything to GitHub. In my defense, I have been and still am relying heavily on Syncthing to handle my data. Anyhow, as of yesterday, I'm back up to speed with Git for my purposes.
Here's a few things I might find my future self forgetting again.
If I'm a dummy and forgot to save my ssh keys:
ssh-keygen -t rsa -b 4096 -C 'foo@bar.com'
If I want to put a local repository on GitHub, the first step is to create an empty repository on GitHub, and then, at the top level of my local repository, run:
git remote add origin git@github.com:AssumeACanOpener/some_project.git
Be sure to go with ssh and not https, unless you like typing usernames and passwords all the time.
On your initial push to GitHub do:
git push -u origin master
Otherwise a pull is going to say you're not up to date, even though you really are.
In the past when I accidentally tracked files, I'd imagine I've probably just copied something, did a git rm, and then moved whatever it was back. But there's a better way. If you've accidentally added a file to version control, but you don't want to delete it and simply take it out of version control:
git rm --cached file.txt
Put file names you don't want to track into a .gitignore file at the top level of your repository. Funnily enough, you need to add .gitignore to the .gitignore file.
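Just as an illustration, a bare-bones .gitignore might look like this (the patterns are placeholders for whatever you don't want tracked):
.gitignore
*.pyc
__pycache__/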
To check what files are currently under version control:
git ls-tree -r master --name-only
If you've forked something from GitHub and still want to follow upstream changes, you'll need to add the upstream repository to your fork. For example:
git remote add upstream https://github.com/gregmalcolm/python_koans.git
Then, to update:
git pull upstream master
And if I ever forget the init, add, status, and commit commands, I need to pack it up and go home.
Wednesday, July 6, 2016
Fedora 24 and Nouveau drivers
My laptop uses Nvidia Optimus for video. This means the GPU built into the CPU is used for 2D, and the Nvidia adapter is used for 3D. The idea, I'm assuming, is to save power. Great, who doesn't want their laptop battery to last longer?
In Linux land though, Optimus is problematic. The proprietary Nvidia drivers don't support it (licensing issues are getting in the way unfortunately). Given that's the case, I've been sticking to Nouveau, the open source driver for Nvidia cards that comes stock with Fedora. I haven't been able to play any 3D games on my laptop because of this, but I do have a Windows machine sitting around somewhere if I really need to scratch that itch.
Fedora 24 and Nouveau drivers turned out to be a whole different ball game however. After installing F24, I noticed my laptop was running really, really hot. lm_sensors, a hardware monitoring tool for Linux, told me my idle CPUs were sitting at a little under 70 degrees Celsius. Yikes! Not good.
Bumblebee to the rescue! Bumblebee is an open source Linux project that lets you switch between the integrated GPU and the Nvidia adapter. With Bumblebee, graphics are handled by the integrated chip by default. You can manually run applications with the "optirun" or "primusrun" commands though, and then graphics will be handled by the more powerful Nvidia chip. Bumblebee supports the proprietary Nvidia drivers as well, so 3D games are not a problem.
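For example, to check which chip is doing the rendering and then run something on the Nvidia one (glxinfo and glxgears are handy little test programs for this):
glxinfo | grep 'OpenGL renderer'
optirun glxinfo | grep 'OpenGL renderer'
optirun glxgears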
So why have I been sticking with Nouveau? Well, setting up Bumblebee in the past has been a bit involved and didn't always turn out well. But it seems some Fedora folks have been busy making it all very simple. Following this guide I had Bumblebee installed and working in a matter of minutes. A quick reboot, and my CPU temperatures went down to under 50 degrees Celsius. So sure, things started out a bit rocky with Optimus, but give it time and the open source community does not disappoint. Looks like I will not be running Nouveau drivers again any time soon.
Sunday, July 3, 2016
Upgrading Fedora
I recently upgraded my laptop to Fedora 24 and took notes about the process. I'm collecting them here, mostly for future reference, but also for anybody else out there that maybe happens to stumble upon them.
First things first, before upgrading I make sure to do the following:
- Push any git commits that need to be pushed. This includes dot files if I've changed them (haha, argh! Just got burned by this one actually).
- Make sure Syncthing is properly syncing the files I have under its control, and that everything is up to date everywhere.
- Go through ~/Applications and see if there's anything I want to save, and if so manually back things up.
- Manually back up my documents and projects.
- Manually back up any videos or other large files I happen to want to save.
Why all the manual backing up of things? Well, for anything 'important' I mainly rely on Syncthing to distribute things around my network, and then rsnapshot to throw things onto a backup drive. But it definitely doesn't hurt to have a few extra copies of important stuff lying around. Everything else? Well, I'm a digital packrat and there's no way I'd be able to keep everything (not without spending a ton of money anyhow). I'm keeping enough MOOC videos I'll probably never watch as it is. Upgrades always motivate me (well, make it necessary perhaps) to organize and cull both old and new data, so I do. I like to think it's a good thing.
I never have trusted the Fedora upgrade process. I've heard a lot of bad things, so I always perform a fresh install. Maybe this just means I'm making more work for myself? Anyhow, post upgrade, I did the following:
- Add the following to /etc/dnf/dnf.conf: "fastestmirror=True" (see the quick sketch after this list).
- Run "dnf update", and reboot.
- Change the desktop and lock screen backgrounds (I may do some of these steps while updates are running of course). The setting is in Gnome settings, under Background.
- Turn off the terminal bell. Found on the terminal edit menu, under profile preferences.
- Disable screen lock. Found in Gnome settings, under privacy.
- Clone my dot files from git and copy them to where they need to be.
- dnf install hexchat p7zip vim-enhanced gnome-tweak-tool
- Enable Firefox sync.
- Configure my Gnome favorites.
- Import rpmfusion gpg keys: "gpg --keyserver pgp.mit.edu --recv-keys (ID)". IDs and more info are found on the site.
- Install rpmfusion free and non-free repos. The "localinstall" option isn't a thing anymore. Just download the rpms and run "dnf install rpmfusion.foo.rpm".
- Install fonts: "dnf install freetype-freeworld".
- Install media codecs: "dnf install gstreamer-plugins-bad gstreamer-plugins-bad-free gstreamer-plugins-bad-nonfree gstreamer-plugins-good-extras gstreamer-plugins-ugly gstreamer1-plugins-bad-free-extras gstreamer1-plugins-ugly gstreamer1-plugins-bad-free-fluidsynth gstreamer1-plugins-bad-freeworld gstreamer1-plugins-base-tools gstreamer1-plugins-entrans gstreamer1-plugins-fc gstreamer1-plugins-good-extras gstreamer-ffmpeg ffmpeg-libs ffmpeg x264 x264-libs h264enc lame lame-libs lame-mp3x twolame mpg123-plugins-extras mpg123 faad2 gstreamer1-libav"
- With Gnome Tweak tool enable the global dark theme, enable the date on the top bar, set font antialiasing to "rgba", set font hinting to "none", and set the window focus mode to "mouse".
- Add the following to /etc/X11/Xresources: "Xft.lcdfilter: lcddefault".
- Install whatever it is I'm working on at the moment, in this case: "dnf install octave qtoctave python2-matplotlib python3-matplotlib python-ipython-notebook python3-ipython-notebook python2-pandas python3-pandas python2-numpy python3-numpy python2-scikit-learn python3-scikit-learn python2-statsmodels python3-statsmodels"
- Install and set up Syncthing so I have my projects and data back.
- And finally save a list of available packages for future reference, so: dnf list available >& /root/available_packages
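And for what it's worth, the two config-file tweaks above (dnf.conf and Xresources) can be applied from the shell as root with something like:
echo 'fastestmirror=True' >> /etc/dnf/dnf.conf
echo 'Xft.lcdfilter: lcddefault' >> /etc/X11/Xresources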
And that's how it's done on my end. I'm not saying this is what everyone should do, but it's what I do to get myself back up and running and productive again. If you're reading this, maybe you'll pick up some tricks? Or maybe just satisfy a bit of your curiosity. Cheers.
Saturday, July 2, 2016
Booting Fedora to RAM
So I installed Fedora 24 yesterday. And while not the only option, the default installation CD is a live CD that boots up a Gnome desktop, and from there gives you the option to either just play around with Fedora or to run the installer application and install Fedora to your hard disk.
As an aside, I chose the Gnome live CD because that will install Gnome. If another particular environment happens to be your thing, then you'll need to choose the appropriate live CD. In Fedora land these are called spins, and you can find out more about them here.
And as another aside, while I don't always do it, a lot of times upon a new release of Fedora I'll force myself to switch up from Gnome to something different, until the next Fedora release or until I tire of it. I suppose KDE might be the only environment I've actually stuck with for a full release cycle besides Gnome (I miss the more polished, integrated, and full featured applications too much I guess), but still, switching it up every now and again helps me not to stagnate I like to think. Speaking of which, I should probably give xmonad (a tiling window manager) or the like a try for once. Maybe I will this time around.
Anyhow, given my wealth of RAM, it makes sense to me to, if at all possible, load my live CDs into RAM when I boot them. This is easy to do with Fedora, though not as simple as I'd like it to be (a dedicated menu item would be nice). But it's still pretty easy. When presented with the initial live CD menu, hit the 'e' key to edit the menu entry. Then append 'rd.live.ram' to the kernel line. Hit Ctrl-X to boot and you're set. It will take a bit longer to boot of course, as everything needs to be loaded into RAM.
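For reference, the edited kernel line ends up looking something like the following; the exact paths and CD label will be whatever your particular ISO uses, so this is just an illustration:
linux /images/pxeboot/vmlinuz root=live:CDLABEL=Fedora-WS-Live-24-1-2 rd.live.image quiet rd.live.ram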
So what's the point? If you're simply wanting to give a particular desktop a go, loading things into RAM will make everything much more responsive and way closer to what the true desktop experience would be like. But even if you're just performing an install, if you're like me you might need to fire up a terminal and do some low-level pre-install or post-install tasks. Or you might just want to bring up a browser or the like while you wait for the installation to finish. Booting to RAM makes this so much more pleasant.
And a bit of trivia: though they weren't exactly live CDs, older versions of Solaris were perhaps the first install CDs, or at least the first I ever encountered, where you could use a browser while waiting for the installer. Quite neat back in the day.