How to Export / Backup CuteFTP Site Manager

How to backup CuteFTP

Skip To Quick Solution

In the past I’ve had issues using CuteFTP’s Import / Export feature to migrate my FTP site list, in particular losing password information. This tutorial shows you how to export or back up your CuteFTP Site Manager and keep your passwords and connection data intact.

Notice: Before we go any further…

This is the process I use when I need to quickly transfer my FTP connection data over to a new machine, or if for some reason I need to reinstall CuteFTP (CuteFTP 7 & 8). Whether it is the ‘right way’ I’m not sure, but it works for me. I take NO responsibility for any mishaps you may have following this process. Just be careful, and if you’re not sure, leave it.

For reference, here is a link to GlobalSCAPE’s support page.

It’s pretty straightforward stuff to be honest; the annoying part is generally finding where the Site Manager sm.dat file is stored. sm.dat is CuteFTP’s file which stores all the connection info.

To find the Site Manager path, use the menu and go to:

Tools >> Global Options >> Security

Under the Security section you should see the Site Manager Path. It should look something like this:

Cute FTP's Site Manager Path

Navigate to the location of the sm.dat file and make a copy of it.

You should now be able to place this into a fresh CuteFTP install and use Tools >> Global Options >> Security to point at your backed-up sm.dat.

Quick Solution

Read the notice before continuing:

  • Open CuteFTP
  • Menu >> Tools >> Global Options >> Security
  • Find Site Manager Path (see image above)
  • Copy sm.dat file at that path
    (Mine was: C:\Documents and Settings\Administrator\Application Data\GlobalSCAPE\CuteFTP Pro\8.1\sm.dat)
  • Paste sm.dat into the appropriate folder in the new CuteFTP install
  • Point CuteFTP to the new sm.dat
  • Done
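If you’d rather script the copy step, here is a minimal Python sketch. The paths in the usage comment are placeholders, since the real Site Manager Path varies by user and CuteFTP version (check Tools >> Global Options >> Security for yours):

```python
import shutil
from pathlib import Path

def backup_sm_dat(sm_dat_path, backup_dir):
    """Copy CuteFTP's sm.dat into a backup folder and return the new path."""
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / "sm.dat"
    shutil.copy2(sm_dat_path, dest)  # copy2 also preserves timestamps
    return dest

# Example with placeholder paths - use your own Site Manager Path:
# backup_sm_dat(r"C:\Documents and Settings\you\...\sm.dat", r"D:\backups")
```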


CSS Browser Hacks


Here we’re going to have a look at a few CSS browser hacks which you can use in those horrible situations where the page looks perfect in all browsers apart from one (IE6, cough!).


@import is used to link an external stylesheet from within a stylesheet. Earlier version 4 browsers (e.g. Netscape Navigator 4) do not understand this rule and therefore ignore it.

Use @import if you need to hide styles from older version 4 browsers:

@import "mystyle.css";      /* hidden from most version 4 browsers  */
@import url('mystyle.css'); /* understood by IE4 but not NN4 */

<!--[if IE]> Conditional Statements

Internet Explorer (aka IE) has conditional statements that allow you to give instructions based on whether the browser is Internet Explorer and which version is running.

<!--[if IE]>
This will echo if the browser is Internet Explorer
<![endif]-->
<!--[if IE 5.5]>
This will echo if the browser is Internet Explorer version 5.5
<![endif]-->
<!--[if IE 6]>
This will echo if the browser is Internet Explorer version 6
<![endif]-->
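For example, a common real-world use is loading an IE-only stylesheet via a conditional comment (ie.css here is a hypothetical filename):

```html
<!--[if lte IE 6]>
<link rel="stylesheet" type="text/css" href="ie.css" />
<![endif]-->
```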

IE 6 Only Hack

If everything works in Firefox and IE7 but not in Internet Explorer 6, use the star html hack:

* html .myclass {
 /* this will only work in IE6 */
}

You can also use the underscore hack, but I prefer not to as this will cause validation errors in your CSS. Just for reference, here’s an example of the underscore hack:

 _margin-left:5px; /* only IE6 will process this line */

Firefox Only Hack

I came up with this when messing around with child selectors. I don’t know whether it has a name or if other people are using it, but it seems to work quite nicely in the version of Firefox I was running… (I haven’t tried it in other browsers but it validates fine.)

Update: it’s called the child hack. Opera and Safari should process this as well.


p > .myStyle {
color:blue; /* Only Firefox runs this style */
}

Everything but IE6

Here’s a hack that works in Firefox and IE7. Handy if you want everything but IE6 to run it. Better still, it validates as well.

html[xmlns] .myStyle {
 /* Firefox and IE7 process this but the document must be XHTML to work */
}

To Hack or not to Hack?…

A lot of the time CSS hacks are just a quick solution for us developers; we don’t care how it works as long as it does. However, my advice is to avoid this and try to find out why there is a problem in the first place.

A little time spent looking into the problem itself, rather than a workaround, will not only give you a far better understanding of CSS and browser irregularities but also stop you repeating the same thing the next time round.



301 redirects using htaccess

301 redirect htaccess tutorial

In this tutorial we look at implementing 301 redirects using htaccess.

Before we go any further, let’s cover a few important points to bear in mind before starting:

  • htaccess is a configuration file for Linux Apache servers and is not traditionally available on an IIS/Windows server (using Windows/IIS? See the section below)
  • The Apache mod_rewrite module must be enabled – uncomment LoadModule rewrite_module modules/ in your httpd.conf
  • RewriteEngine must be declared On in your htaccess file – we’ll come to this
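For reference, on many Apache builds the line to uncomment in httpd.conf looks like the following, though the exact module path varies by install (the .so filename here is the typical default, not taken from this article):

```apacheconf
# Enable the rewrite module (path varies by installation)
LoadModule rewrite_module modules/mod_rewrite.so
```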

What do we mean by 301?

301 is the status code returned by the server. When a 301 response is returned this means the requested resource/page has been permanently moved to a new location.

There are other redirects, such as 302 or 307, which are known as temporary redirects. In all the programming languages and configuration utilities I’ve come across, a 302 redirect has been the default, so to make it a permanent redirect you have to explicitly define this.

RewriteEngine on

Before we write any code we need to make sure the RewriteEngine is on. The following code does this:

RewriteEngine on

Quick & Simple redirects

Here are 2 methods of writing a simple permanent redirect in htaccess. This will redirect /old to /new (example.com is used below as a placeholder domain). Note, it doesn’t have to be a directory; it could also be old.html and new.html.

Redirect permanent /old http://www.example.com/new


Redirect 301 /old http://www.example.com/new

I find either of these two methods ideal if you need to redirect a small number of pages, as they are easy to remember and straightforward to set up.
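As a sketch, redirecting a single renamed page looks like this (old.html, new.html and example.com are all placeholder names):

```apacheconf
# Permanently redirect one renamed page to its new name
Redirect 301 /old.html http://www.example.com/new.html
```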

Redirects using RewriteRule

The following example uses a RewriteRule and a regular expression to map ALL the pages/files of an old domain to a new one (example.com below is a placeholder for the new domain). Note the [R=301]; this sets the redirect as a 301 response.

Options +FollowSymLinks
RewriteEngine on
RewriteRule (.*) http://www.example.com/$1 [R=301,L]

Although regular expressions can be tricky, the result is worth the effort – in this one line of code we have correctly redirected and mapped the entire website to its new domain.


It’s not always the case that you need everything redirected; sometimes you need to be selective in your redirections. Htaccess provides this flexibility through its RewriteCond directive.

Here are a few examples of RewriteCond in action:

Redirecting nonwww to www

Redirects all nonwww requests to the relevant www version (example.com is a placeholder below).

Options +FollowSymlinks
RewriteEngine on
RewriteCond %{http_host} ^example.com [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,NC]

This rule says ‘redirect (301) all requests that are made under the nonwww to their relevant www version but ignore this if the request is under the www’.

Redirecting http to https

Redirects all http requests to the relevant https page by using port 443 as a condition.

RewriteEngine On
RewriteCond %{SERVER_PORT} !443
RewriteRule (.*) https://www.example.com/$1 [R=301]

What if I’m hosted on a Windows/IIS Server?

If you’re using an IIS / Windows server you won’t be able to use the code above to perform your redirects. Instead, you will need to make the changes directly through the IIS interface, use a code alternative, or look into using an IIS module that allows you to simulate htaccess. ISAPI_Rewrite is a popular alternative.

Here is a good 301 redirect tutorial that shows how to set this up in IIS.


Robots.txt Tutorial

Robots.txt Tutorial

IMPORTANT: be very careful when playing about with your robots.txt; it’s very easy to block off your entire site without realising!…


What is a Robots.txt?

A robots.txt is a file used by webmasters to advise spiders and bots (e.g. Googlebot) where in the website they are allowed to crawl. The robots.txt is stored in a website’s root folder.

When should I use Robots.txt?

Your robots.txt should be used to help bots like Googlebot crawl your website and guide them away from where they aren’t supposed to go.

DO NOT use a robots.txt to try and block scrapers or other heavy-handed crawlers. At the end of the day it’s up to the bot whether it respects your robots.txt or not, so in all likelihood it won’t even get read by these crawlers.

Another important thing worth mentioning is that anyone can see your robots.txt. So bear this in mind when you are writing it; you don’t want to include anything like: Disallow: /my-secret-area/

How does it all work?

The best way to learn how it works is probably to look at some examples. In its simplest form the contents of a robots.txt might look like this:

User-agent: *
Disallow:

In this example the definition is saying: for ALL (*) User-agents, Disallow nothing, i.e. feel free to crawl anywhere you want.

To do the opposite and block EVERYTHING you would use:

User-agent: *
Disallow: /

You can also use the Allow directive, which works in the opposite way.


User-agent: *
Allow: /


Googlebot and My Robots.txt

You can specify a rule for just Googlebot by using the User-agent property. For example, to block Googlebot from your entire site:


User-agent: Googlebot
Disallow: /


And to block just Google’s image crawler:

User-agent: Googlebot-Image
Disallow: /

Google (and some other bots) respect the * wildcard character. This can be very helpful for blocking off areas which contain similar URL parameters. The robots example below tells Googlebot NOT to access anything with a ? in it.

User-agent: Googlebot
Disallow: /*?
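Google also supports $ as an end-of-URL anchor. For example, to stop Googlebot crawling PDF files (a hypothetical use):

```
User-agent: Googlebot
Disallow: /*.pdf$
```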

Please note, in the past I’ve noticed defining a Googlebot user-agent rule has caused Googlebot to completely ignore all other User-agent: * rules… I don’t know whether they have changed this yet but it’s worth bearing in mind.

[UPDATE] WARNING: If you have ‘User-agent: *’ AND a ‘User-agent: Googlebot’ rule, Googlebot will ignore everything you defined in the * definition! I don’t think many people realise this, so be very careful. Remember, ALWAYS test your changes using Google Webmaster Tools’ robots.txt tool. If you don’t have an account for Google Webmaster Tools, GET ONE NOW!


The X-Robots-Tag HTTP header

You can also define your robots rules at a page level by using a robots meta tag. The first of these examples is basically the default a bot would assume if no definition was provided…

<meta content="index, follow" name="robots" /> (default - no robots tag)
<meta content="noindex, follow" name="robots" />
<meta content="index, nofollow" name="robots" />
<meta content="noindex, nofollow" name="robots" />

Please note, the NOFOLLOW in these examples should not be confused with the rel="nofollow" attribute of links.
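The X-Robots-Tag in the section heading is the HTTP-header equivalent of these meta tags: the same directives can be sent as a response header, which is handy for non-HTML files such as PDFs. Here’s a sketch for Apache, assuming mod_headers is enabled:

```apacheconf
# Send noindex/nofollow for all PDF files via the X-Robots-Tag header
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```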


Enabling Short Tags in PHP

enabling php short tag tutorial

In this tutorial we look at how to enable short tags in PHP. Short tags, or short_open_tag as it is known in php.ini, give you the option to use short-style opening tags for PHP code blocks, e.g. <? instead of <?php.
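To illustrate, the two opening styles below are equivalent once the setting is enabled (a trivial sketch):

```php
<?php echo "full tags always work"; ?>
<? echo "short tags only work when short_open_tag is On"; ?>
```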

To enable this feature on your server, open your php.ini file (this should be somewhere in your PHP install folder, e.g. /bin/php/phpx.x.x/php.ini) and change the short_open_tag setting to:

short_open_tag = On

Please note, it is not generally advised to use short tags as this can lead to future problems when migrating to a server that doesn’t have short_open_tag enabled. Personally I don’t like this feature; it seems good at the time but trust me, there’ll be a time when it comes back to haunt you.


MySQL SELECT WHERE date = today

The quick answer is (my_table and date_column below are example names):

SELECT * FROM my_table WHERE DATE(date_column) = DATE(NOW());
I was tearing my hair out trying to figure this out the other morning and it’s really quite simple. I thought I might have to use a day/month/year match or possibly use PHP, but thankfully the guys at MySQL included this nice DATE() function, which means you don’t have to worry about the hours, minutes and seconds being different. Simples!

MySQL DATE / TIME Functions

The functions used in this MySQL query are:

* DATE() returns the date without time
* NOW() returns the current date & time (note we’ve used the DATE() function in this query to remove the time)
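You can’t easily run MySQL inline here, so as an illustration here is the same idea sketched with SQLite (which ships with Python): SQLite’s date() strips the time portion much like MySQL’s DATE(), and date('now') stands in for DATE(NOW()). The orders table and created_at column are made-up names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, created_at TEXT)")
# One row stamped right now, one from a fixed date in the past
conn.execute("INSERT INTO orders VALUES (1, datetime('now'))")
conn.execute("INSERT INTO orders VALUES (2, '2020-01-01 09:30:00')")

# date() drops the time part, so only today's row matches
rows = conn.execute(
    "SELECT id FROM orders WHERE date(created_at) = date('now')"
).fetchall()
print(rows)  # [(1,)]
```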

For more information, see the MySQL Date and Time functions documentation on the official MySQL site.