Saturday, January 29, 2011

Convert SCSI Tape Auto-Loader to iSCSI Target

Hi All. We're currently in the process of re-evaluating our server backup process. Currently we're using Backup Exec 11d to back up all our servers as though they are all physical servers, performing full/incremental jobs directly to our Adic Faststor 2 Autoloader.

What we're looking at is upgrading to Backup Exec 12.5, and moving to a backup-to-disk-to-tape solution where we have an online copy of the data on disk, and offload to tape at the end of the month for off-site storage.

As part of this, we want to virtualize our backup server. Using an iSCSI target as the location for our online backup is easy enough, but I'm having trouble getting info about iSCSI solutions for the tape offload. As a way to try and keep costs down, I was curious if we'd be able to use a SCSI-to-iSCSI bridge to convert our existing tape autoloader until a later date when we can get additional budget. i.e. Would something like this (http://www.quantum.com/Products/Connectivity/Index.aspx) work for us?

Has anyone else tried to use a bridge in this type of scenario?

  • I haven't used a bridge, but I have done this using the ietd software iSCSI target stack, and a patch to allow raw IO. This may be a viable option for you.

    Notes on this are available at www.wlug.org.nz/iSCSINotes. Key points are to export both the loader and the tape device as separate LUNs under the same target.

    There's more on the actual patch process, including some additional notes about a newer patch required for newer kernel / ietd versions at www.wlug.org.nz/XenNotes.
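    For illustration only, here is a rough sketch (not taken from the wlug notes) of the shape of an ietd.conf stanza that exports the changer and the drive as two LUNs under one target. The target name and /dev/sg device numbers are assumptions, and the Type keyword depends on the raw IO patch version, so check the notes above for the exact syntax:

    Target iqn.2011-01.nz.example:faststor2
        # medium changer (robot) - device path is a placeholder
        Lun 0 Path=/dev/sg3,Type=rawio
        # tape drive - device path is a placeholder
        Lun 1 Path=/dev/sg4,Type=rawio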

    dyasny : is the same possible using tgtd instead of ietd?
    Daniel Lawson : dyasny: I haven't tried myself, but I have heard that it does not work. (There's a few howto type pages that suggest it does, but in a vague enough manner that I suspect the authors just assumed it would, as none of them actually go into any detail regarding tape)

Outlook Anywhere Credentials on SBS08

I'm running SBS08 and have some clients using Outlook Anywhere (combination of Outlook 2003/2007) to access the Exchange Server.

The users are currently prompted for their domain usernames/password every time they start up Outlook (there is no "Save this password" prompt). Is there a way to configure the server or clients so that the user credentials are somehow cached/remembered?

  • In my experience, two things must be true for Outlook Anywhere to open without prompting for credentials:

    1. The user must be logged into the PC with an Active Directory account
    2. The primary mailbox that they are accessing must be the same one as the account they are logged in as.

    If both of the above are not true, then you will have to enter a password to access Outlook. However, if the above is true and you still have problems, more issues may be at play.

    taspeotis : Outlook Anywhere is RPC over HTTP. See my answer as to how you can save the password.
    Wesley 'Nonapeptide' : I know Outlook Anywhere is RPC over HTTP, but in my experience the user you log in as and the mailbox you attempt to connect to will cause this symptom of the credentials being asked for.
    taspeotis : The question isn't why this happens, it's "Is there a way to configure the server or clients so that the user credentials are somehow cached/remembered?". I don't see where you answered that question.
    Wesley 'Nonapeptide' : Ah ha. I see the nuance now. I suppose I looked at the problem a bit deeper. Instead of answering how to cache credentials, I tried to address why the authentication dialog box popped up in the first place.
    Brad Leach : Thanks Wesley. The computers on which I am running Outlook Anywhere are not connected to the domain.
    Wesley 'Nonapeptide' : If your PCs were on the domain and users were logging in via domain credentials, then Basic or NTLM credentials would not matter and Outlook would open without asking for credentials. However, I now realize that my answer isn't quite addressing your specific question.
  • Use NTLM authentication, then edit this registry value. Users can then save the password.

    Brad Leach : Thanks! I will look into this solution.
    From taspeotis

Administrator password reset in Windows Server 2008

I lost my password for Windows Server 2008. Does anyone know how to reset the admin password?

  • From another admin account: net user UserName NewPassword

    Or you can use one of the many bootable NTFS password reset utilities

    This assumes that the machine is in a workgroup and not a domain.

    From MarkM
  • You can reset or recover your password in a number of ways. The three most prominent that come to my mind are:

    1. Boot into a Linux Live CD and replace the magnify.exe tool with cmd.exe. Then reboot into Windows and reset the password using the Accessibility tools that are available at the login screen (a rough sketch of the swap is shown after this list). More information here: http://blogs.thecodearchitects.com/?p=196
    2. Use a password recovery tool like John the Ripper's Windows CD to brute force reveal the existing password: http://www.openwall.com/passwords/microsoft-windows-nt-2000-xp-2003-vista
    3. Use a password reset utility such as the ones found on the Ultimate Boot CD. Be warned, however, that utilities which attempt to replace passwords from a boot environment have been known to cause more problems than they are worth.
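    Expanding on option 1, a rough sketch of the swap (drive letter, tool name and the new password are assumptions; run the copies from the offline boot environment, not from the running Windows):

    rem back up the accessibility tool, then replace it with cmd.exe
    copy D:\Windows\System32\magnify.exe D:\Windows\System32\magnify.exe.bak
    copy D:\Windows\System32\cmd.exe D:\Windows\System32\magnify.exe

    rem after rebooting into Windows, launch Magnifier from the logon screen and run:
    net user Administrator NewP@ssw0rd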
  • Edit: never mind, I like the magnify.exe switcheroo trick. Although I'm particularly partial to this one if the target has a FireWire port.

    If you know your Directory Services Restore Mode password, you can use that to change the administrator password; this password is supplied at install time.

    Press F8 during the textmode boot phase and select Directory Services Restore Mode.

    Failing that, if you're on a domain, you'll probably have to rebuild the PDC and Active Directory domain.

    If that is the case, consider it a lesson learned the hard way.

    1. Can't stress enough the importance of regular backups, and testing those backups semi-regularly. We all know it's a PITA, but it will save a much bigger pain. I'd also avoid ntbackup.
    2. Document everything. Update it when you change anything. It's all about dotting your i's and crossing your t's.
    From fenix

Linux distribution for VPN router with VLANs support

I need to build a router with the following requirements:

  1. 2 physical interfaces (WAN and LAN)
  2. Should be able to handle several VPN tunnels
  3. Each VPN tunnel should be routed to a certain VLAN
  4. Web-based GUI

What Linux distribution can I use as a start? Or maybe there is a distribution that fits my needs already?

  • Well, from personal experience I can recommend Vyatta, which is a Linux based Router, Firewall, VPN and can do much more. Vyatta also sells the hardware and software.

    But my personal favorite is pfSense. pfSense is a free, open source customized distribution of FreeBSD tailored for use as a firewall, VPN appliance and router (taken from the pfSense site).

    You did not say anything regarding the expected load on the system: will it be for a large company, a small business, or personal use? You could buy a server from a vendor with no OS, or use a desktop. If this is for a small business, I have had good luck with the ALIX system boards.

    nedm : +1 for pfSense - you can fire up multiple OpenVPN server instances (we use one for each remote office and route accordingly) or use the PPTP or IPSec VPN servers that are also built in. Meets all the OP's requirements, is easy and intuitive to install and set up, has a decently large community of users and best of all is free.
    3dinfluence : I just installed an ALIX based pfSense firewall at my church where I help out with IT. So far it's worked out better than the Cisco PIX 501 for a few reasons. The PIX is EOL'ed by Cisco, which you don't have to worry about with an open project. Cisco's VPN client has limited OS support these days as they are trying to push people to their SSL solution. And it's much easier to manage, which is a plus in a team of volunteers with a range of skill levels when it comes to dealing with networking.

I need to rewrite https://domain.com => https://www.domain.com because of wildcard SSL

Hey

Like the subject says I need to rewrite https://domain.com => https://www.domain.com. But I have a wildcard SSL setup for the domain and the root domain does not match *.domain.com, thus the browser brings up an error

domain.com uses an invalid security certificate.

The certificate is only valid for *.domain.com

This is my current vhost config

<VirtualHost 127.0.0.1:443>
        ServerAdmin user@domain.com
        DocumentRoot /usr/local/app/domain/webapps/www
        JkMount /* somestuff
        ServerName domain.com
        ServerAlias www.domain.com 
        ErrorLog logs/domain.com-error_log
        CustomLog logs/domain.com-access_log combined
        CustomLog logs/domain.com-deflate_log deflate
        RewriteEngine on
        RewriteCond %{HTTP_HOST}   ^domain\.com [NC]
        RewriteRule ^/?(.*)         https://www.domain.com/$1 [L,R,NE]
        SSLEngine on
        SSLCertificateFile /etc/httpd/conf/ssl.crt/x.domain.com.crt
        SSLCertificateKeyFile /etc/httpd/conf/ssl.key/x.domain.com.key

</VirtualHost>

I was hoping that the RewriteEngine would kick in before the SSL is loaded, but it doesn't. Is this solvable without getting a new cert that is just for the root domain?

  • Unfortunately the name that the client is talking to is checked against the certificate by the client, not the server. As far as the client is concerned it is talking to domain.com not <something>.domain.com - it will be unaware of any URL rewriting that is being done at the server end.

    So you will need an extra certificate for the other name to avoid certificate errors.

    serverninja : This is correct. SSL negotiation will always happen first.
    Olaf : One could argue that it's rather "fortunate" that the client checks itself, not unfortunate. If the redirect could happen prior to SSL negotiation, traffic would obviously not be encrypted and a man in the middle could redirect the client anywhere they want to. This extra certificate usually costs money, which is unfortunate :)
    Olaf : Also, you might be able to get both the wildcard and the top-level domain in just one certificate - see the answers to http://serverfault.com/questions/87005/can-i-buy-just-one-ssl-cert-for-a-subdomain. I've not tried that, but this way you could work with just one certificate (and on the same IP address) instead of requiring an extra IP address just for the top-level address.
    David Spillett : I agree, but it is unfortunate for the person asking the question as it impacts what he is trying to do (serve both addresses so redirection works smoothly but with one certificate). Security is a GoodThing(tm); unfortunately being secure isn't always convenient.
  • The host headers are (normally) not visible without terminating the SSL connection, so the certificate needs to be valid for whatever the client is entering... You could rewrite http://domain.com to https://www.domain.com though (why the heck did the comment code remove the www. when it made the example into a URL? ^^)
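    To illustrate that last suggestion, a minimal sketch of a plain-HTTP vhost that redirects everything to https://www.domain.com. It reuses the names from the poster's config; the listen address and omitted log settings are assumptions:

    <VirtualHost *:80>
        ServerName domain.com
        ServerAlias www.domain.com
        RewriteEngine on
        # send every plain-HTTP request to the www hostname over HTTPS
        RewriteRule ^/?(.*) https://www.domain.com/$1 [L,R=301,NE]
    </VirtualHost>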

Replication sync failing; Publisher out of Identity ranges.

I have Merge Replication set up with a SQL 2005 Publisher/Distributor and roughly 100 SQL 2005 Express Subscribers. Everything was working fine for months and now all of a sudden everyone is getting the below errors.

I have been Googling around but to no avail. Can anyone offer some insight? I even tried deleting a user's Subscription. I also tried running -->

sp_adjustpublisheridentityrange @publication='MyDB'

Anyway, here are the errors -->

Error messages:
The Publisher failed to allocate a new set of identity ranges for the subscription. This can occur when a Publisher or a republishing Subscriber has run out of identity ranges to allocate to its own Subscribers or when an identity column data type does not support an additional identity range allocation. If a republishing Subscriber has run out of identity ranges, synchronize the republishing Subscriber to obtain more identity ranges before restarting the synchronization. If a Publisher runs out of identit (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147199417)
Get help: http://help/MSSQL_REPL-2147199417
Not enough range available to allocate a new range for a subscriber. (Source: MSSQLServer, Error number: 20668)
Get help: http://help/20668
Failed to allocate new identity range. (Source: MSSQLServer, Error number: 21197)
Get help: http://help/21197
  • First, are your machines patched with at least Service Pack 3? This error was often thrown by a bug fixed by Service Pack 2 Cumulative Update 4. That may be a place to start.

    If you are all patched up, I'd next check the data type of your identity columns. If they are currently INTs, for example, and the publisher is trying to allocate a range that exceeds the maximum INT value (2,147,483,647), you would get that error. You could resolve it by changing your identity field to a BIGINT. With 100 subscribers, your publisher has had to allocate a large number of ranges, so this could be likely.
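    As a quick way to see whether an INT identity column is the culprit, something like the following (a sketch only; the table and column names are whatever your publication actually contains) lists identity columns and their remaining headroom below the INT maximum:

    SELECT OBJECT_NAME(object_id)                    AS table_name,
           name                                      AS column_name,
           CAST(last_value AS bigint)                AS last_identity_value,
           2147483647 - CAST(last_value AS bigint)   AS int_headroom
    FROM sys.identity_columns
    WHERE last_value IS NOT NULL
    ORDER BY int_headroom;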

    Hope this helps.

    Refracted Paladin : Are there any known issues with upgrading systems currently in a Replication Topology to Service Pack 3 from 2? Thanks.
    Brian Knight : I've not seen any. I've upgraded systems using all three types of replication from SP2 to SP3 and have had no issues at all. Of course, I always recommend testing in a QA environment first ;). Here's the link to the KB article for that fix: http://support.microsoft.com/default.aspx/kb/941989

SBS 2003 R2 install errors

I just installed SBS 2003 R2 as a learning lab box on my home network. The install seemed to take quite a while, with a bit of shuffling between disks 4 and 5. The continue setup checklist showed a red X beside the Exchange install and finally gave me an error about the "schema needing extending run, forest /adprep", and told me at the end that the install did not succeed. After a longer than normal reboot it comes up and seems to be OK.

What logs do I need to check to see what went wrong?

Should I reinstall and try burning the ISOs to new media?

cheers

  • Sounds like it stopped somewhere at the end of the install (while populating the Exchange database). Being a learning lab box, it shouldn't be too much of a worry.

    I built a 2k3 SBS R2 play box last week and didn't run into any errors like that. Check the event logs (Application and System); they might explain something. If all else fails, try reinstalling again. It is only a learning box, and I think I re-installed mine countless times over the years trying new things.

    "Should i reinstall and try burning the iso's to new media?" Being the SBS series was never released as a downloadable ISO (like all there other os's) than you could also have a bad ISO. Might I suggest trying to play in a Virtual Machine environment? You can then mount the ISO's instead of needing to burn 5 CD's VMWare Server is free, and so is ESXi.

    Hope that sort of answers your question in a roundabout way!

    piagetblix : "Being the SBS series was never released as a downloadable ISO (like all there other os's)" The ones I am using came from Technet where they are available as ISO's. Don't really want to use a VM yet(im using a basic dell desktop pc and i know esxi will not boot from it) I may eventually get there when i can put together a more dedicated system. I have reburned the disks at 1x and verified and will give the reload another shot. Thanks!
    Rod : TechNet, of course! Don't they have a DVD copy of SBS R2 on there? It would save burning 5 CDs. ESXi 4 is very strict on the HCL side of things, but 3.5 is more flexible. I've got 2 ESXi 3.5 machines in old P4 3GHz HTs with 2/4 GB RAM and only 80 GB SATAs. Off the original subject and question a bit. We'll see how you go with the reinstall... let me know how you go, I'll be interested to know if it happens again.
    piagetblix : @Rod I re-burned the disks and the reinstall went well, no errors. I did not install Exchange this time, though. Perhaps I will add it later. I'm planning on jumping into the ESXi realm soon, just waiting for a buddy who said he can get me an old HP rack server with SCSI disks for free!! And no, TechNet does not have a DVD, only CD images... beats me, I guess they figure you'll use Server 2008 and haven't bothered to update the ISOs. cheers!
    From Rod

Robots.txt command

I have a bunch of files at www.example.com/A/B/C/NAME (A,B,C change around, NAME is static) and I basically want to add a command in robots.txt so crawlers don't follow any such links that have NAME at the end.

What's the best command to use in robots.txt for this?

  • I see you cross-posted this on Stack Overflow, but I'll put my answer here as well.

    You cannot glob in the Disallow line unfortunately, so no wildcards. You would need to have a disallow line for each directory you want to exclude.

    User-agent: *
    Disallow: /A/B/C/NAME/
    Disallow: /D/E/F/NAME/
    

    It's unfortunate, but the standard is very simplistic and this is how it needs to be done. Also note you must have the trailing / in your disallow line. Here is a fairly good reference for using robots.txt.

    From palehorse
  • To my knowledge there is no pattern matching routine supported by the robots.txt file parsers. In this case you would need to list each of those files with their own Disallow statement.

    Keep in mind that listing those files in the robots.txt file will give out a list of those links to anyone who might want to see what you're trying to "hide" from the crawlers, so there may be a security issue if this is sensitive material.

    If these links are in HTML served up by your server you can also add a rel="nofollow" to the A tags to those links and it will prevent most crawlers from following the links.

  • As previously mentioned, the robots.txt spec is pretty simple. However, one thing that I've done is create a dynamic script (PHP, Python, whatever) that's simply named "robots.txt" and have it smartly generate the expected, simple structure using the more intelligent logic of the script. You can walk subdirectories, use regular expressions, etc.

    You might have to tweak your web server a bit so it executes "robots.txt" as a script rather than just serving up the file contents. Alternatively, you can have a script run via a cron job that regenerates your robots.txt once a night (or however often it needs updating).

  •     User-agent: googlebot
        Disallow: /*NAME
    
        User-Agent: slurp
        Disallow: /*NAME
    
    palehorse : Globbing is not allowed in the file, so the * does not work. The only reason it works for User-agent is that it is handled differently in that line.
  • It cannot be done. There is no official standard for robots.txt; it's really just a convention that various web-crawlers try to respect and correctly interpret.

    However Googlebot supports wildcards, so you could have section like this:

    User-agent: Googlebot
    Disallow: /*NAME
    

    Since most web-crawlers won't interpret wildcards correctly (and who knows how they interpret them), it's probably safest to isolate this rule to Googlebot, but I would assume that by now every large search engine supports it as well, since whatever Google does in search becomes the de facto standard.

  • Best documentation I've seen for this is at robotstxt.org.

How can I use the same key for SSH and SSL (https)

Hello all,

I'm trying to install the development tools for a small team, and I can't get the authentication right.

Since we are a distributed team, the server is on the internet. And I'd like to have SSO+zero client configuration.

So basically git over https+webdav is impractical, because the git client can only use basic auth but doesn't save the password, and some IDE plugins don't even forward the password prompt in their UI.

I have to use git over ssh then. I installed gitosis and it basically works with asymmetric keys, ok. I'll have to ask each dev to install their key, I can do that, forget zero configuration.

Then I want the developers to access the web tools (wiki, tickets, etc.) that are on https, but this time I have to give them either a login/password or another private key, just because the formats aren't compatible between SSH and SSL and the place to store them on the OS is not the same. Now do I have to forget the SSO?

Have I just been sent to hell, or am I mistaken?

Thanks in advance for your insights.

  • You're pretty much out of luck - SSH keys and SSL certificates are different animals and as far as I know they aren't interchangeable.

    Your best bet is probably to configure single sign-on / shared password store / whatever for your web tools & leave git/gitosis as an authentication island.

    From voretaq7
  • OpenSSH has experimental support for x509 certificates here:

    http://roumenpetrov.info/openssh

    You could issue a single x509 certificate per user and use them for both.

    Instead of putting the user's pubkey in their authorized_keys, you can specify the allowed DNs of the user certificates; and you must configure the webserver/web application so that the DN is translated to a username.

    b0fh : You mean installing the patched version of OpenSSH? It may already be shipped by your distribution (I know that at least Gentoo does this). There is no point in using the same RSA key for both applications but with a different format - you still have to set up the SSH public key of each user by hand. OTOH, with x.509 keys, you could keep your CA separate, and adding new users to SSH or HTTPS can be done without knowledge of their public key; you only need to pick a consistent DN policy...
    From b0fh
  • TL;DR summary: If you have an SSL/X.509 certificate+key, just give the private key file to ssh. Or, if you already have an SSH key in id_rsa, just use it with OpenSSL when signing a CSR. That's all.


    Let's assume you have a user's SSL certificate in joeuser.pem and its private key in joeuser.key.

    Since X.509 uses standard RSA keys, and so does SSH, you should be able to just tell your SSH client to use joeuser.key -- the only requirement is that it be in an understandable format.

    Look at the insides of joeuser.key and check if it looks kinda like this:

    -----BEGIN RSA PRIVATE KEY-----
    MGECAQACEQCxQaFwijLYlXTOlwqnSW9PAgMBAAECEETwgqpzhX0IVhUa0OK0tgkC
    CQDXPo7HDY3axQIJANLRsrFxClMDAghaZp7GwU2T1QIIMlVMo57Ihz8CCFSoKo3F
    2L/2
    -----END RSA PRIVATE KEY-----

    In OpenSSL, this format is called "PEM" (as in -outform pem) and is used by default. The same format is used by OpenSSH, and you can use ssh -i joeuser.key to connect.

    You can extract the public key in OpenSSH id_rsa.pub format (for putting into authorized_keys) with:

    ssh-keygen -y -f joeuser.key > joeuser-ssh.pub

    (The same public key in PEM format can be extracted with openssl rsa -pubout, but it will be of little use.)


    If you have a DSA key, it should work exactly the same as RSA.

    grawity : nraynaud: They are _developers_. If they cannot install an X.509 cert into their favourite browser (at least by following TFM), it's already scary.
    grawity : ...anyway. For NSS-based browsers (Firefox, Mozilla, Epiphany) there's a set of command-line tools to modify `cert.db`. For Windows, certificates can be installed using certutil or (I think) through AD group policy. SSH requires no configuration at all, just `ssh-keygen -y -f` and dump both files to user's homedir.
    From grawity

How do I securely execute commands as root via a web control panel?

I would like to build a very simple PHP based web control panel to add and remove users, and to add and remove sections, in the nginx config files on my linode vps (Ubuntu 8.04 LTS).

What is the most secure way of executing commands as root based on input from a web based control panel?

I am loath to run PHP as root (even if behind an iptables firewall) for the obvious reasons.

Suggestions welcome. It must be possible as several commercial (and bloated, for my needs) control panels offer similar functionality.

Thanks

  • Whatever you do, it will always be a possible security hole.

    Some suggestions:

    • Write a simple shell script that executes its input, chown it to root and set the setuid bit; PHP will call it and pass it the supplied command.
    • Use more specific scripts for the various tasks you will perform, and have them setuid root; again, PHP will call them.
    • Write a daemon which accepts commands on a TCP socket and executes them, and have it run as root; PHP will connect to it.
    • Anything else based on the concept "have something else on the system that can do what you want as root and have PHP call it".

    None of the above seems actually safer (and definitely not simpler) than just having your "control panel" run as root. And most "control panel" packages (such as Webmin) just bypass this entirely and run as root.

    From Massimo
  • Write a CGI script in Python; I think it's probably easier. Although it is so much safer, as Massimo said, to get Webmin running...

    From PirosB3
  • Create a sudo rule for the user the web server runs as, so it can only run specific commands. To edit a file, for example, you could have the web server make a copy in a directory owned and only writeable by the webserver (so malicious local users can't step on your changes mid-process), and have a sudo rule to copy the edited file into place. You can lock the sudo rules down so that only those commands with specific arguments can be handled.
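    As a sketch of what such a rule might look like (the user name, paths and commands are assumptions for an Ubuntu-style setup; add it with visudo), the web server user is allowed to copy one staged file into place and reload nginx, and nothing else:

    # /etc/sudoers fragment - hypothetical example, paths are placeholders
    www-data ALL=(root) NOPASSWD: /bin/cp /var/panel/staging/site.conf /etc/nginx/sites-available/site.conf, /etc/init.d/nginx reload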

    Also, ensure that you're authenticating users, and ensure that you sanitize any input from the user to stop shell metacharacters or similar from sneaking in. When adding users, for example, you might verify that the input fits within a maximum length and consists of only letters and numbers. Using sudo would prevent most things like that anyway, but multiple layers of protection are good. It's not possible to be too paranoid with user input. ;)

    Or just install Webmin. :)

    defraagh : +1 for the sudo rules, -1 for suggesting Webmin (which does precisely what the OP is trying to avoid: run with full root privileges).
    dannysauer : Webmin runs as root, but authenticates the user and runs commands as either root or a specified user. Coincidentally, sudo is setuid root and authenticates the user, allowing certain users to do certain things as either root or another user. From a security perspective, they're very nearly the same thing with a slightly different interface.
    From dannysauer
  • There are cPanel and Webmin that do this; they are also notoriously insecure. The consequences of a hack are also great: you lose your entire system and will have to reinstall from scratch when you are hacked.

    Just like you don't want to use telnet, you don't want to use http. Make sure you use HTTPS, and buy a real certificate; after all, throwing your root password over the net is a serious mistake and you want to make sure it's going to the right server.

    EDIT: You could run cPanel in a chroot, so even if it was compromised you could just make a new chroot. It's also a padded prison that lets you define exactly what root has access to.

    Gnudiff : or create your own certificates?
    Rook : No, go with a real one. It's only $30, and the whole point is that you want to make sure you're typing your ROOT password into your server and not into a "Man in the Middle" (MITM).
    JPerkSter : Isn't the only difference that one is validated by a company and one's not?
    Rook : If you buy it, then a browser will be able to tell you that you are in fact talking to your server. If you don't buy one, the browser will always throw an error when you visit your server, and it will throw the same error when you give your root password to a hacker conducting a MITM attack. There isn't much point in self-signed certs.
    From Rook
  • I think you could combine these items to achieve a good level of security:

    • Run PHP as a specific user either via fast-cgi, cgi-bin, or phpsuexec
    • Consider using the Hardened PHP project http://www.hardened-php.net/
    • As suggested, use sudo to get the root level access you require
    • If possible, using SELinux here could give you very good security, but it can be tedious to deploy

    Also, though I prefer the approach above, I had a client who simply dumped actions to a file and then had a script process those actions. The script ran via cron every 5 minutes.

    Is public access required for this? If not, use iptables and Apache's own auth configuration to protect it from abuse.

    Rook : I almost gave you a -1 for suggesting hardened-php. That project is awesome, but it's not designed for this. The whole point is that he wants to give PHP **LOTS** of privileges, not take them away.
    jeffatrackaid : It all depends on how you plan to actually execute the system commands. If you call a system function then having the added protections of the project would be significant. I've seen it in use in several cases precisely for this reason and it works very well. Hardened PHP + Sudo can be a powerful combination.
    pobk : PHP and sudo? Are you mad?
    Zephyr Pellerin : It's also worth noting that SELinux will block what he's trying to do. Likewise with hardened.
  • You could symlink the files into a web-root folder, use htaccess for semi-security (on top of your PHP auth), and write a script to morph them... then you just need to restart the nginx process when the files are changed.

    www.cyberciti.biz/faq/freebsd-configure-nginx-php-fastcgi-server/

    You could use something like the following to monitor the changes, send you an email when they change (with the changes), and restart the nginx process via a script..

    http://inotify-tools.sourceforge.net/

    Might be overkill when you could detect it via cron every minute or so..

    From Grizly
  • Idea number 1: Use puppet to direct the changes to your config files.

    If you need to edit the files, then try: create a PHP script which edits its own local copy. This then gets checked into a local SVN repository. Then, using svn-externals, the only thing you have to do as root is an svn update in the nginx config files (which of course you've checked into SVN) and you're set.

    The update script can run on a cron job.

    From pobk
  • Thanks all - some great suggestions. The idea of having the whole thing controlled through an SVN server seems like a good one; then I can use some of the ideas herein to check in / check out and restart nginx.

    For some reason I've lost access to the first account I posted the question with so don't seem to be able to vote up / accept answers - sorry! Not sure how to remedy.

Backup strategies for Windows Server 2003 filesystem

As I discovered recently, full filesystem backups of anything fancier than straight file storage seem to be of limited use. Examples:

  1. AD, registry, and Windows itself: restore is not hardware-independent
  2. MSSQL and pgsql servers: unless the backup is made with VSS (which appears to bog down the server as much as doing a hot backup of the databases anyway), the data is not necessarily in a usable state
  3. NTBackup-created backups cannot be restored on anything newer than Windows Server 2003

I'm guessing that if your server hardware became unusable and you had to build a replacement machine in a single-server, 9-5 availability environment, it would be desirable (depending on what hardware you could get) to have backups that are as widely compatible as possible, since you're clearly stuck building and setting up from scratch. Given that, are there any major downsides to the following backup strategy?

  1. Down SQL services
  2. 7-zip tar update of all server hard disks to an external backup file
  3. Verify integrity
  4. Up SQL services again

(The tar update is just to avoid the middle step, when restoring, of having to restore the complete backup and then each incremental backup one at a time.) A rough sketch of the job is below.
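What such a job could look like as a batch file (the service name, source folders and archive path are assumptions; 7-Zip's "u" command updates an existing tar archive and "t" tests it):

    rem 1. stop SQL so the database files are quiescent
    net stop MSSQLSERVER
    rem 2. update the tar archive in place, then 3. verify its integrity
    7z u -ttar E:\Backups\fileserver.tar C:\Data D:\Data
    7z t E:\Backups\fileserver.tar
    rem 4. bring SQL back up
    net start MSSQLSERVER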

  • I'm not so sure that you're correct in all cases, although you do have a point of sorts. However, you need to consider the case where you are restoring to the same hardware and software base as was backed up (which most places would - or at least should - do as a matter of course anyway). The scenario would be a DR where a server has failed and you need to get it back now, rather than restoring from a legacy or historical backup (which I suspect is the one you are considering more).

    Getting the data back is trivial. Getting the OS and its configuration back can vary from relatively trivial to decidedly non-trivial. Getting a server application and its configuration back is nearly always non-trivial. Full backups can save you in these situations.

    What I'm saying is that any sane backup strategy should consider a lot more than just the restore procedure, but should also consider the hardware and software in the restored environment.

    Kev : Good points to consider, but what if you don't know ahead of time what the restored environment will be? Small businesses can't always afford to have an identical secondary server standing by.
    Kev : Also, are you saying that a tarfile (taken with downed services rather than using VSS) cannot just be unpacked onto the drives, assuming the same hardware?
    From mh
  • System State backups can be restored to dissimilar hardware. This can be a difficult experience but it can be done. link text

    SQL db dumps are not hardware dependent, but that doesn't get the application back.

    Assuming that tarring the disks works on identical hardware (which I doubt, unless the server is booted from a Linux boot CD and tarred from that environment), will it work if the target server is completely new, or the motherboard or RAID card is replaced? Without shutting the server down there won't be a restorable backup of AD unless the System State backup is done separately.

    Can this solution be automated, can the output be verified? Can it be documented and are the steps simple enough that if you are on vacation, or have moved on to a different firm, a recovery can be performed? Is there tech support available if issues are encountered? If you are truly trying to diminish points of frustration during an emergency then all of these issues need to be considered.

    The statement about MS not supporting NTBackup in Server 2008 is incorrect; Server 2008 does provide for restoring NTBackups. link text

    An image based backup that can be restored to the same or dissimilar hardware, or as a virtual machine (P2V), is one of the minimum requirements if a "fast" restore is needed. Generally this will require a 3rd party product and/or MS add-on: StorageCraft, Acronis, BackupExec, MS DPM, VMWare/Xen/HyperV, or hardware based snapshotting of VMs in a SAN along with replication. SBS 2003 has a server backup that might be considered "good enough", and all Server 2008 editions have image based backups.

    Kev : Thanks for the links. In particular I wish I had been able to find the second one earlier. I guess we'll stick with NTBackup for now, then. As for your questions, though, I don't think tar vs. NTBackup solves any of them. A restore on any hardware, same or different, still needs to have documented all the configuration and setup. Correct me if I'm wrong, but there's no free or built-in way to take any kind of backup that can then be plugged into a new system with empty drives, hit a button, and have the system working as it was before.
    Ed Fries : I completely agree that any backup method requires good documentation. My experience (through personal science experiments) has been that the more "alternative" the backup method is the more complex and less flexible they are to document, execute and t-shoot. One free method is using a P2V converter and then restoring as a VM. VMWare, Xen and MS all have free converters for P2V migration. Same caveats apply re. execution, automation, monitoring. I'm not aware of any built in imaging method for Server 2003.
    From Ed Fries

DNS-free XP workstations and AD?

I have 70-80 kiosk-type machines with no DNS. We do this so the users of these machines cannot access internet resources not listed in the hosts file. Of course they can access things by IP address, but that's not a problem.

We will be moving to AD soon and I'm not sure how to handle these machines. A few thoughts:

  1. Configure a BIND9 DNS server just for them and have it give out the proper records so the clients can find the domain controllers. Not sure if this will be problematic.

  2. Disable recursion and forwarding on the DNS server. Have the clients that need to resolve internet addresses use two DNS servers: one AD, and a secondary that is a caching DNS server not doing AD (not sure if this will work, and it seems that having a non-AD DNS is a bad idea).

  3. Get one DNS server on the domain to do local only and another to do internet. I don't see how this is possible. I can disable recursion for the domain but not for individual servers.

I'm leaning towards solution 1 as I think that's the only one that will work. I'm not planning on doing DDNS, just putting in the proper SRV records. I'm assuming this will work. Any other ideas?

  • AD requires DNS (there is no alternative option) and will work best with an AD-integrated DNS (i.e. MS dynamic DNS). You need to sit back and reconsider how you want to block sites that are not approved for your kiosk machines.

    The most obvious solution seems to me to be to just not add the kiosk machines to your AD and continue as you are. You seem to have no problems with it, and to be at least reasonably on top of things, and I don't see any requirement to add them, so why not?

    From mh
  • Option 1 is a viable approach. XP domain members will work fine with BIND provided you populate all of the correct SRV records; assuming you are going to use AD's own integrated DNS for the rest of your domain, it shouldn't be that hard to manage. However, you can also do it with a non-integrated MS DNS (not installed on a DC) rather than BIND, but you will have to disable root hints [ as per this ].
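    To give a feel for what "the correct SRV records" means, here is a partial sketch of a BIND zone (the domain name, host name and address are placeholders, and this is not the complete set Microsoft documents for locating domain controllers):

    _ldap._tcp.example.local.               IN SRV 0 100 389  dc1.example.local.
    _kerberos._tcp.example.local.           IN SRV 0 100 88   dc1.example.local.
    _ldap._tcp.dc._msdcs.example.local.     IN SRV 0 100 389  dc1.example.local.
    _gc._tcp.example.local.                 IN SRV 0 100 3268 dc1.example.local.
    dc1.example.local.                      IN A   192.0.2.10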

    Option 2 won't work. Secondary DNS servers on an XP client only get used if there is no response from the first DNS; if the first one responds with a failure, that still counts as a response.

    Option 3 is covered in my response to 1.

    As a general comment, using a non-MS DNS is viable in a domain, but it's almost always a lot more work than it's worth (IMO).

    From Helvick
  • Please don't use BIND in a Windows client environment: it's a pain in the neck, you will break things, and you'll have to spend a lot of time doing things that are usually just automatic with AD dynamic DNS. Solving this problem with DNS is using a sledgehammer to hang a painting.

    Check out the free Windows SteadyState tool. SteadyState is designed for folks implementing shared computers for libraries, schools and kiosks. You can set all sorts of policies, including restricting all internet access and whitelisting specific websites.

    Frenchie : BIND is actually fairly trivial to implement in a Windows environment.
    Jim B : BIND is trivial to implement; it's after you implement it and start using the environment that you realize it was a mistake
    Frenchie : Three years of running with 500 clients and we've not yet seen an issue? Admittedly, we only have our DCs allowed to update their records.
    duffbeer703 : Implementing isn't rocket science, but maintaining is a real pain. It's also the wrong approach to solve the problem posed in the question. Why would you take an oddball approach to your infrastructure when free, supportable and easy to use tools are available?
  • Here's an idea (as kooky as it may sound) and you'll have to test it to see if it works:

    My assumption is that you allow them to go to some web sites based on the fact that you're adding entries to the hosts file. Based on that assumption my idea goes like this:

    1. Set up AD integrated DNS on your DC.

    2. Disable recursion on the DC\DNS server.

    3. Set up conditional forwarders on the DC\DNS server for the domains that you allow the users to go to. For example, for Google set up a conditional forwarder for google.com to use ns1.google.com, ns2.google.com, etc.

    This will allow domains that you "authorize" (by adding conditional forwarders) to be accessed by the users and will block all other external domains. You can find which forwarders to use for each "authorized" domain by looking up the NS records for each domain with nslookup; a command-line sketch is below.
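    For example (a sketch only; the server name is a placeholder and the forwarder IPs are whatever the NS lookup returns), a conditional forwarder zone can be created from the command line with dnscmd:

    dnscmd DC1 /ZoneAdd google.com /Forwarder 216.239.32.10 216.239.34.10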

    This seems like the simplest solution to implement. Your internal DNS is uncomplicated, it allows you to use only a single DC\DNS infrastructure instead of trying to manage multiple DNS servers (internal resolution servers and external resolution servers), and keeps the client configuration simple by allowing you to configure each client with the same DNS settings.

    From joeqwerty
  • Active Directory can do what you require by default. Set your kiosk machines to get their DNS from the domain controller. On the domain controller you should have a "." zone so it will only serve the domain DNS entries and not forward anything else. Either leave your specific addresses in the hosts files on each kiosk, or add them as entries on the DC to make your life easier with updating. Point your machines which need full DNS access at the 2nd DC, which replicates the domain DNS entries but not the other entries.

    JamesRyan : In forward lookup zones create a zone called "." (set this zone to not replicate) and it will not resolve anything not specifically listed (for that server). The DNS server without a "." zone will continue to resolve everything. Because you set replication by zone, they keep all the (AD)domain stuff in sync automatically.
    JamesRyan : (untick store in AD to stop it from replicating)
    From JamesRyan
  • You could implement a proxy, locking them down to a certain whitelist and allowing the other machines unfettered access. I use it myself; it speeds up the net for all, frees the router from processing so many connections, and generally improves your admin life.

    The logging helps too... know what your users have been doing, analyse trends in usage/wastage, figure out if any of them have been infected with a web-accessing trojan... then you only allow the proxy through the router, denying all other IPs (assuming the kiosk users can't change the IP). Save dosh in bandwidth!

    That way, no need to worry about your DNS infrastructure, easier to maintain and document!

    If you want to get really funky, squid lets you run simultaneous copies, so you can run one whole proxy just for the kiosks, and another for your clients (with different run-levels and bandwidth allocations etc), configure them via the DHCP server and the whole mess gets sorted automatically!

    http://www.squid-cache.org

    From Grizly

BackupExec 12 to Bacula questions

We have a 128-node compute cluster for environmental modeling, with a master/head node which we currently back up with a Windows 2003 system running BackupExec 12 and a single HP LTO3 tape drive. We have recently ordered an Overland NEO200s 12-slot library, and are considering migrating off Windows to CentOS 5 for the backup server. The master/head node is RHEL5, with the compute nodes currently being migrated from a mix of RHEL3/4/5 to CentOS5. I'm fairly familiar with RH/CentOS, but have no experience with Bacula. We've tentatively settled on Bacula as our cluster vendor recommended it.

My questions are:

  1. Does Bacula support an Overland NEO200s/LTO3 library?
  2. Can Bacula catalog/restore tapes written by BE?
  3. I've heard of Amanda, but am even more unfamiliar with it than Bacula.

Any assistance would be appreciated.

Dave Frandin

  • Several people have moved off BE to Bacula, some for the reasons you cite, Dave. Many Bacula users are using Overland libraries; I'll check on the specific model you are looking at.

    Users have also successfully converted tapes from other systems. Easiest way is to restore the tape with BE then back it up with Bacula, using some tricks we know that allow you to make the Bacula tape have the same date/time stamp as the original backup. Our CTO is Kern Sibbald, author of Bacula and project manager.

    Unlike Amanda, even our Enterprise Edition is GPL, so you can use it for free. Amanda Enterprise requires payment and a proprietary EULA.

    Who is the cluster vendor who recommended Bacula? I would like to write to him to thank him and also to congratulate him on his wise advice :-)

    Feel free to contact me: jack.griffin@baculasystems.com

    LVDave : Hi! Thanks for the info! The cluster vendor is Advanced Clustering Technologies (http://www.advancedclustering.com/). It was one of their techs who set up the headnode; his name escapes me right now. I'd heard of Bacula previously but was not familiar with it; I'm currently looking at it on a CentOS5 VirtualBox VM to get somewhat familiar with it. We just received the library the other day, so I expect this project will become top priority in the near future... Thanks again for the info... Dave

“Login failed for user ‘NT AUTHORITY\ANONYMOUS LOGON’.” to SQL Server 2005

I'm trying to migrate a legacy application we have to Windows Server 2008 x64 and IIS7. It's written in Classic ASP and connects to a SQL Server 2005 database.

However, when the page runs, I receive the error:

[Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

The connection string I'm using is: Driver=SQL Server; Server=SERVERNAME; Initial Catalog=DBNAME; I can't see any reason for it to be using the anonymous logon as when it was running on my 32-bit Win2k3 server, it accessed the SQL Server using DOMAINNAME\SERVERNAME$.

I have the following settings.

SQL Server 2005 - running in mixed mode. IIS7 Application Pool - Allow 32-bit applications set to True.

I've also added the server as a user on the SQL Server.

I've tried a few things now and I'm starting to run out of ideas.

  • Hi

    I think you are using the wrong database driver for your ODBC connection. MS SQL 2005 uses the SQL Native Client.

    Driver=SQL Native Client; UID=username; PWD=password; Server=SERVERNAME; Initial Catalog=DBNAME;

    You can download the setup here: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=df0ba5aa-b4bd-4705-aa0a-b477ba72a9cb

    Liam : Right, that sorted that error out. However, now I get a new error: [Microsoft][SQL Native Client][SQL Server]Login failed for user ''. The user is not associated with a trusted SQL Server connection.
    Liam : That sorted it. Not ideal as we try to avoid having usernames and passwords in our files, but it does the job for now. Cheers.
    grub : Hi, check the answer from Brian. Maybe you can remove the username and password from the file if you're using the trusted connection.
    From grub
  • I believe you need to add an attribute to your connection string that will allow the application pool account in IIS7 to authenticate on the SQL Server. See below:

    Provider=SQLNCLI;Server=myServerAddress;Database=myDataBase;Trusted_Connection=yes;
    

    The Trusted_Connection piece will allow IIS to connect using the credentials of the app pool account. If that is running under the machine account, as you said, then the login you created at the SQL Server will work.

Is there any utility in Windows 7 that is similar to the Local Users and Groups snap-in?

Hello.

Windows 7 Home Premium has the Local Users and Groups MMC console snap-in disabled.

Is there any custom utility I can use to manage my accounts? I need no more than adding users and email addresses to have some test accounts for my development purposes.

I don't want to use the regular "user accounts" tool in Control Panel to add users because:

  • I don't want them at my welcome screen
  • I can't assign a mail address to them

  • The MMC is disabled since Home Premium does not support Active Directory users, and the net effect is that it does not support the Users and Groups MMC snap-in.

    After some research, it appears that the only way to do this is using the command line or PowerShell scripting. There are currently no third-party tools simplifying this functionality for Windows Home.

    I would suggest looking at either building a Virtual Machine using Windows XP/Vista/7 Professional for this purpose, or alternatively upgrade to Windows 7 Professional.

    Janis Veinbergs : Programmatically I am able to create new users, set their properties and use them when logging in to my website, and they are not on my welcome screen. That's all I want: to use an existing, simple tool, not to create my own.
    Bart Silverstrim : I don't think the functionality was disabled at the kernel level if it supports this functionality at the command line or scripting level. Microsoft has a history of enforcing licensing restrictions (and adding sales of higher-end versions of Windows) by arbitrarily limiting settings via a flag in the registry, so it's more likely there is a flag being checked in the registry for the version of Windows and that is killing the snap-in (there have been several writeups in the past on the "true" difference between Windows Server and Windows Workstation if you Google it).
    Bart Silverstrim : And no, I'm not suggesting you search for the flag and disable it to get higher functionality from Windows. A)it's against licensing and B)last I'd read Microsoft found people turning Windows Workstation into Windows Server through a single change to the registry so they added worker threads whose only job is to monitor the Registry for changes that affect the version of Windows and change it back to prevent piracy (I think there were people doing this to get around connection limits with IIS or something...it's been several years since I read about it.)
    Diago : @Bart Thanks, I edited my answer and removed the kernel disable line since you're right, it is done at the registry level.
    Bart Silverstrim : @Diago-glad to give clarification :-) Many new administrators might be surprised at how little difference there really is among Windows versions given the price and licensing differences.
    Janis Veinbergs : You should edit your answer as MMC is NOT disabled, just some snap-ins are (not all).
    From Diago
  • While you can't use the MMC, there should be a control panel applet allowing local user administration. There's always been a limited applet since XP Home.

    'nusrmgr.cpl' and 'control userpasswords2' should still work, I believe.

    If you're looking for a scriptable means to do this, you can use 'net user' to add/modify local accounts (I'm about 90% sure this works for the various Home flavors of Windows) and then add a registry key, via 'reg add', to hide it from the Welcome screen (see: http://www.petri.co.il/hide_a_user_from_the_welcome_screen_in_windows_xp.htm).
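    A sketch of what that looks like from an elevated command prompt (the account name and password are placeholders; the registry path is the one described in the article above):

    rem create a local test account
    net user testuser1 P@ssw0rd1 /add
    rem hide it from the Welcome screen
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v testuser1 /t REG_DWORD /d 0 /f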

    From sinping