
Tuesday, October 7, 2008

SLES10 update and SSL certificate problem

Have you ever needed to update a remote SLES10 system from your local update server (e.g. a YUP server)? There may be many reasons for such a situation: for example, the remote system may have unstable Internet connectivity to the Novell servers, or no direct connectivity at all, being able to reach your local update server only via a VPN. You can surely imagine other situations.

Let's suppose our update server is reachable from the remote location via the HTTPS protocol at the URL https://update.domain.tld/path/. The update source is of YUM type and we want to update the system with the zypper command. First, we need to add the update server as an update source. If the update server's SSL certificate is signed by a well-known certification authority, then you don't have to worry; you can use the following command to add the update server to the update sources:
zypper service-add https://update.domain.tld/path/update update
But if you generated your own certification authority or a self-signed server certificate, then you may notice these errors:
Curl error for 'https://update.domain.tld/path/repodata/repomd.xml':
Error code:
Error message: SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
The message is comprehensible: it says that the server certificate is untrusted and can't be verified against the known CA certificates. Simply said, the server certificate is signed by a CA the system doesn't trust, or it is self-signed. The message warns you that there may be a man-in-the-middle attack attempt.

The curl application uses a CA bundle to verify server certificates. The bundle is typically stored in the /usr/share/curl/curl-ca-bundle.crt file. If you want to make your own CA certificate trusted, concatenate its PEM content to the end of the file like this:
cat ca.crt >> /usr/share/curl/curl-ca-bundle.crt
After this command, everything will begin to work and the update server URL will be added to the update sources. Then, the update may start:
zypper update
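The effect of the CA bundle can be reproduced locally with openssl alone. The sketch below creates a throwaway self-signed certificate (all paths under /tmp are arbitrary) and shows that verification fails until the certificate is present in a CA file:

```shell
# Create a throwaway self-signed "CA" certificate for the demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Test CA" \
    -keyout /tmp/ca.key -out /tmp/ca.crt -days 1 2>/dev/null

# Without a trusted CA file, verification fails -- the same class of
# error that curl reports above:
openssl verify /tmp/ca.crt || echo "untrusted, as expected"

# Once the certificate is part of a bundle, verification succeeds:
cat /tmp/ca.crt > /tmp/bundle.crt
openssl verify -CAfile /tmp/bundle.crt /tmp/ca.crt
```

Appending your CA certificate to curl's bundle does exactly what the last two lines demonstrate, just for curl's own verification path.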
I haven't mentioned yet that you will have a similar problem if you use the rug command. Even after applying the previous steps, the rug command still produces an error about an SSL certificate verification failure. I suspect that rug doesn't use curl to access the update server. So, does anybody know how to resolve this in the case of rug?

Thursday, April 17, 2008

Fighting phishing plague with ClamAV

It's been about one year and one month since the long-awaited ClamAV 0.90 was released. The latest available version today is 0.92.1. The new versions incorporate a lot of bug fixes, changes in configuration syntax, scripted updates and many other enhancements.

In my opinion, the most important one is the implementation of an anti-phishing engine, created with the help of the Google Summer of Code 2006 program. It supports more generic methods of identifying phishing emails, based on searching for and comparing faked and real URLs in their bodies. The engine relies on heuristic analysis supported by special signatures. It was further improved through releases 0.91 and 0.92, and it has been enabled by default since release 0.91. If you are interested in the details of the releases, you can check the clamav-announce mailing list.

So, to protect your mail communication from the phishing plague, you only need to update to the latest version of ClamAV. But that's not everything. Sometimes you may need to turn the engine off, e.g. when testing false positives, or to configure it more thoroughly. All of this is available in the configuration file /etc/clamd.conf in the form of these options (default values are shown):
  • PhishingSignatures yes
    • try to detect phishing messages via signatures
  • PhishingScanURLs yes
    • scan URLs in the messages for heuristic analysis
  • PhishingRestrictedScan yes
    • the anti-phishing engine works only with domains listed in the .pdb database; scanning all domains instead may increase the false positive rate
  • PhishingAlwaysBlockSSLMismatch no
    • always block SSL mismatches; this option tends to raise false positives
  • PhishingAlwaysBlockCloak no
    • always block cloaked URLs; enabling it seems to increase false positives
You can find more on these options in the configuration file, which is well annotated, or in the related ClamAV man page (e.g. here).

Was there any way to deal with phishing messages before the release of ClamAV 0.90? Yes, there was; it is still here, and it is good practice to combine it with ClamAV's own anti-phishing. Before ClamAV 0.90 you could use third-party signature files containing definitions of phishing mails. The best-known project doing this (and more) is the Sanesecurity project, where you can download such signatures and feed your ClamAV with them. ClamAV's default place for signatures in the filesystem is the /var/lib/clamav directory, which is where the downloaded files belong. After that, you need to restart the clamd service (or similar).
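To keep the files current without fetching them by hand each time, a small cron job can download and reload. Below is a minimal sketch only: the URL is a placeholder (not the real Sanesecurity mirror), and the official Sanesecurity update script is far more robust.

```shell
#!/bin/sh
# Minimal signature-update job -- a sketch. SIG_DIR matches the default
# ClamAV database directory; URL is a placeholder for illustration.
SIG_DIR=/var/lib/clamav
URL=http://example.org/phish.ndb

fetch_if_changed() {
    # $1 = source URL, $2 = destination file; returns 0 if updated
    tmp=$(mktemp) || return 1
    if ! curl -s -o "$tmp" "$1"; then
        rm -f "$tmp"
        return 1
    fi
    if cmp -s "$tmp" "$2" 2>/dev/null; then
        rm -f "$tmp"        # unchanged, nothing to do
        return 1
    fi
    mv "$tmp" "$2"          # new or changed signature file
    return 0
}

# Run from cron; reload clamd only when a file actually changed:
# fetch_if_changed "$URL" "$SIG_DIR/phish.ndb" && /etc/init.d/clamd reload
```

Reloading only on change avoids needlessly interrupting clamd on every cron run.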

Besides this, you can download scam signatures from their web page, which can help you get rid of spam based on MIME attachments like PDF documents, JPEG images and so on. I have been using them for a long time and they are really effective; you don't need to waste your time tuning SpamAssassin rules. By the way, I don't know of any that are as effective. To check the content of the signature files, use the sigtool utility like this:
  • sigtool -l /var/lib/clamav/scam.ndb
  • sigtool -l /var/lib/clamav/phish.ndb
You can find a lot of interesting information there. Part of the signatures' names is well documented, and you can break them down with the help of this web page.

I must not forget to mention that the more comfortable approach is to install on your system the update script which can download the signatures for you automatically via the cron service. The script is available on the web page in the usage section (or directly here).

In the end, we have two weapons to fight phishing. The Sanesecurity signatures seem to be more robust and mature, while the ClamAV anti-phishing engine is too young to be as accurate. But the engine uses a heuristic approach, which makes it more flexible and dynamic, without the drawbacks of static signatures, such as missing zero-day attacks. So the best practice is to join their powers and use them together. If you go through the ClamAV tests at the Sanesecurity web page (they aren't up to date, by the way), you will find that the previous sentence about quality is true. ClamAV isn't as accurate so far, but in many cases it catches mails which are invisible to the Sanesecurity signatures.

Wednesday, April 2, 2008

SpamAssassin, spamd and the -L switch

I have been using SpamAssassin in the company as our primary spam blocker without trouble for years. The version used today is 3.2.3. It is boosted with a set of custom rules, with the rules from the SARE project, with malware rules and others. It is also very useful to check for updated rules with the sa-update command-line utility (available in the newer versions of SpamAssassin). Overall, spam is identified quite precisely with very few false positives. Users have a simple web interface available where they can check their caught messages.

But a week or two ago, I had to begin solving a strange problem with some of them. A few users began complaining that they were receiving from 5 to 10 spams a day. That's not a bad score, because these users and their mail addresses are very popular among spammers; without the spam filter, each of them would receive in total about two to five thousand unsolicited messages.

So, where to begin? I took a suspicious message and tried to find out why it had been evaluated as a regular message. The message is below (some unimportant parts have been removed):

Return-Path: this1905@indygov.org
Received: from [187.83.34.110] (helo=rxexv)
by 168-215-192-214.dslindiana.com with smtp (Exim 4.62 (FreeBSD))
id 1KþxNR-0005Tq-Km; Wed, 2 Apr 2008 03:27:21 -0400
Message-ID: 47f334bf.3040802@indygov.org
Date: Wed, 2 Apr 2008 03:24:47 -0400
From: this1905@indygov.org
Subject: Gotcha! All Fool!

Wise Men Have Learned More from Fools... http://80.98.114.98


At first glance, it is clear the message should be considered spam. The included URL is listed on the URIBL and SURBL blacklists. Further, the message was sent via a relay blacklisted by SpamCop. So what's wrong?! Let's check its score and the applied rules simply with the SpamAssassin client:

spamc < message

I am using the spamc client because our spam filter is implemented with it. That means the spamd daemon should be running in the background, and the spam filter communicates with it via spamc on demand. The previous command produced output containing the following lines:

Content preview: Wise Men Have Learned More from Fools... http://80.98.114.98
[...]
Content analysis details: (4.3 points, 4.5 required)

pts rule name description
--------------------------------------------------
3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100%
[score: 1.0000]
0.6 FH_HELO_EQ_D_D_D_D Helo is d-d-d-d
0.0 NORMAL_HTTP_TO_IP URI: Uses a dotted-decimal IP address in URL
0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS
0.1 AWL AWL: From: address is in the auto white-list


It seems like some rules are ignored. I'm sure I have turned on the DNS RBL checks and the URIDNSBL plugin, but the related rules are missing here. Debugging the spamd daemon didn't show anything; all rules were parsed and loaded successfully. To turn debugging on, restart the spamd daemon with the -D option switch (or place it in the init script or in the sysconfig configuration file).

Another way to check the rules is to bypass the spamd daemon and run SpamAssassin directly, like this:

spamassassin -t < message

If you want to run it with debugging, use the -D option switch. I was quite surprised when the direct check showed me a different result:

Content preview: Wise Men Have Learned More from Fools... http://80.98.114.98
[...]
Content analysis details: (7.2 points, 4.5 required)

pts rule name description
--------------------------------------------------
3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100%
[score: 1.0000]
0.0 FH_HELO_EQ_D_D_D_D Helo is d-d-d-d
0.0 NORMAL_HTTP_TO_IP URI: Uses a dotted-decimal IP address in URL
2.0 URIBL_BLACK Contains an URL listed in the URIBL blacklist
[URIs: 80.98.114.98]
1.5 URIBL_JP_SURBL Contains an URL listed in the JP SURBL blocklist
[URIs: 80.98.114.98]
2.0 RCVD_IN_BL_SPAMCOP_NET RBL: Received via a relay in bl.spamcop.net
[Blocked - see ]
0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS
-1.8 AWL AWL: From: address is in the auto white-list

The mentioned missing rules (URIBL_BLACK, URIBL_JP_SURBL, ...) are suddenly here. Why? What is the difference between the spamc client and the spamassassin command? There shouldn't be any, but their behaviour can be controlled with many option switches (command-line arguments). I ran spamc without any, but the spamd daemon may be running with some. To check them, we can display the spamd process and its command-line arguments with e.g. the ps command:

ps -A -o cmd | grep spamd

The command will display something like this:

/usr/sbin/spamd -L -d -c -m 8 -u spamass -x -s local4 -r /var/run/spamd.pid

Let's go through the arguments and discuss their purpose:

  • -L - used to turn off any DNS and network tests
  • -d - daemonize the process
  • -c - create user preferences files if they don't exist
  • -m - maximum number of children to spawn from the parent process
  • -u - run the process under the specified user
  • -x - disable user config files
  • -s - syslog facility for logging events
  • -r - write the process id to the file

There are many other options. If you want to know them, check the spamd man page (e.g. here). Of those listed above, only the first option, -L, is also available for the spamassassin command. We can check it with:

spamassassin --help

Now everything should be clear. The different scores were caused by the -L switch, which changed the applied rules: it disables the RBL checks, and consequently the corresponding rules are not applied. Why I used the switch before, I'm not quite sure; perhaps I had hit some performance constraints, because the network tests take some time.

What's the lesson? Don't forget to check the command-line arguments of the spamd daemon. They may disable some SpamAssassin features despite their definitions in the configuration files.
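The check can even be scripted as a quick safeguard. A small sketch (it only looks for the standalone -L flag; it is not bulletproof option parsing):

```shell
# Report whether a spamd command line contains the standalone -L flag,
# which disables all DNS and network tests.
has_local_only_flag() {
    case " $1 " in
        *" -L "*) return 0 ;;
        *)        return 1 ;;
    esac
}

# Usage against the live daemon:
# cmdline=$(ps -A -o cmd | grep '[s]pamd' | head -n 1)
# has_local_only_flag "$cmdline" && echo "warning: network tests disabled"
```

Dropping such a check into a monitoring script makes it hard to forget the flag again after the next reinstall.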

Monday, February 25, 2008

Monitoring ASSP with monit

Do you know ASSP, the Anti-Spam SMTP Proxy? I'm going to write more details about it in the near future. If you have already deployed it on your servers to eliminate spam, I will show you how to monitor it with monit and restart it in case of a failure. The configuration was tested on Linux.

At first, I had to edit the service's init script to be able to check its pid. The init script after the changes is below; the added lines are the ones that create and remove /var/run/assp.pid:

#!/bin/sh -e
PATH=/bin:/usr/bin:/sbin:/usr/sbin

case "$1" in

start)
echo "Starting the Anti-Spam SMTP Proxy"
cd /usr/share/assp
perl assp.pl
# record the pid of the running assp.pl process for monit
ps ax | grep "perl assp.pl" | grep -v grep | awk '{ print $1 }' > /var/run/assp.pid
;;

stop)
echo "Stopping the Anti-Spam SMTP Proxy"
kill -9 `ps ax | grep "perl assp.pl" | grep -v grep | awk '{ print $1 }'`
rm -f /var/run/assp.pid
;;

restart)
$0 stop || true
$0 start
;;

*)
echo "Usage: /etc/init.d/assp {start|stop|restart}"
exit 1
;;

esac
exit 0
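The ps|grep|awk pipeline above can be replaced by something terser; a sketch, assuming pgrep is available on the distro:

```shell
# Print the pids of processes whose full command line matches a pattern.
# Less error-prone than the grep -v grep dance in the init script above.
assp_pids() {
    pgrep -f "$1"
}

# In the start) branch this would become:
# assp_pids "perl assp.pl" > /var/run/assp.pid
```

pgrep excludes itself from the results, so no extra filtering is needed.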

I know there are better, more distro-compliant ways to do this, but I just want to show you how to configure the monit service. Monit depends on the pid file, which is used in the service check block.

The assp service listens on TCP port 55555 by default, where it provides a simple configuration interface over the HTTP protocol. The interface requires authentication, so if you try to access it without proper credentials, it returns status code 401, which means the client failed to authenticate. You can see the whole error message by telnetting to the port:

telnet localhost 55555
GET / HTTP/1.0



I used HTTP protocol version 1.0 and sent a GET request. If you want to use version 1.1, you need to send the Host header as well. After pressing Enter to send an empty line, the request is processed and the following message comes back:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Anti-Spam SMTP Proxy (ASSP) Configuration"
Content-type: text/html

Server: ASSP/1.2.6()

Date: Mon, 25 Feb 2008 13:45:53 GMT

Content-Length: 49


...
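What matters for the check is only the status code on that first reply line. A tiny helper (illustration only) pulls it out:

```shell
# Extract the status code from an HTTP status line,
# e.g. "HTTP/1.1 401 Unauthorized" yields "401".
http_status_code() {
    set -- $1        # word-split the status line into $1 $2 $3...
    printf '%s\n' "$2"
}

# With ASSP running, the manual check boils down to something like:
# first=$(printf 'GET / HTTP/1.0\r\n\r\n' | nc localhost 55555 | head -n 1)
# http_status_code "$first"    # expect 401 from the authenticated interface
```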


We are interested in the first line, which contains the already mentioned status code. The snippet of monit configuration which monitors our service and the related process via the pid file looks like this:

check process assp with pidfile /var/run/assp.pid
start program = "/etc/init.d/assp start"
stop program = "/etc/init.d/assp stop"

It checks the pid of the process, and if the process is not running, the service will be restarted. Now we will extend it with a check of connectivity to port 55555:

check process assp with pidfile /var/run/assp.pid
start program = "/etc/init.d/assp start"
stop program = "/etc/init.d/assp stop"
if failed host 127.0.0.1 port 55555
then restart

But we would like to talk to the port using the HTTP protocol; the line above is a simple TCP connectivity check, and doing it via HTTP is better. Monit supports this, and you can do it like so:

check process assp with pidfile /var/run/assp.pid
start program = "/etc/init.d/assp start"
stop program = "/etc/init.d/assp stop"
if failed host 127.0.0.1 port 55555 protocol http
then restart

The above line is not the right one for us because it suits unauthenticated environments: by default, monit checks only the return code and succeeds on status 200 (OK). To catch return code 401, we need to redefine what we expect. When using the send/expect mechanism, you must omit "protocol http":

check process assp with pidfile /var/run/assp.pid
start program = "/etc/init.d/assp start"
stop program = "/etc/init.d/assp stop"
if failed host 127.0.0.1 port 55555
send "GET / HTTP/1.0\r\nHost: localhost\r\n\r\n"
expect "HTTP/[0-9\.]{3} 401 .*Unauthorized.*"
then restart


So, we construct the whole GET request and expect error code 401. If we receive anything else, monit evaluates it as a connectivity failure and restarts the assp service. To be more fault tolerant, it's better to let the check fail two, three or more times before concluding the service is really not listening on port 55555:

check process assp with pidfile /var/run/assp.pid
start program = "/etc/init.d/assp start"
stop program = "/etc/init.d/assp stop"
if failed host 127.0.0.1 port 55555
send "GET / HTTP/1.0\r\nHost: localhost\r\n\r\n"
expect "HTTP/[0-9\.]{3} 401 .*Unauthorized.*"
for 3 cycles then restart

That's everything. Why do it like this? If the assp service is running, the configuration interface should be accessible on TCP port 55555; otherwise something is wrong and we should restart the service.

Friday, January 4, 2008

CentOS 4.6, Amavisd-new 2.5.3 update and troubles with Perl

Yesterday, I had a little trouble with one of my mail servers, based on the CentOS distribution and enhanced with the amavisd-new package from Dag Wieers's repository. The system was configured for automated updates, and during the night the latest version of the amavisd-new package was installed; it was 2.5.3, I think. Naturally, this led to a service restart and a reread of the required system libraries and dependent Perl modules.

The perl-MIME-tools package seemed to be the most critical part. It is responsible for disassembling the MIME parts of messages containing multimedia attachments, and it is used by the amavisd-new content filter. I noticed the misbehaviour when the following error messages appeared in the mail log:
  • ... amavis[28144]: (28144-01) (!!)TROUBLE in check_mail: mime_decode-1 FAILED: Can't locate object method "seek" via package "File::Temp" at /usr/lib/perl5/vendor_perl/5.8.5/MIME/Parser.pm line 816 ...
I thought something strange must have happened to the File::Temp module; perhaps its integrity was broken. So I began checking the module. I looked through the system update history, but I didn't find anything updated recently. The module is part of the perl package, which hadn't been updated recently; in fact, there hadn't been any updates of it since the system's installation. Finally, I proved it by verifying the package metadata with the command:
  • rpm -V perl
It showed that the package wasn't broken and no file had been modified or corrupted. The File::Temp module was at version 0.14. You can check it like this:
  • perl -le 'use File::Temp; print File::Temp->VERSION'
Further, I went through the update log again and noticed that the perl-MIME-tools package had been updated a few weeks ago as well; its new version was 5.424.
The next phase was trying to google through mailing lists for anything related to the problem.

I found out that a few people had had similar issues with the amavisd-new filter, but nobody was sure how to solve them. The most frequent advice was to reinstall the File::Temp module.

I was deciding whether to use the latest version from the CPAN repository, but Dag's repository contains a standalone perl-File-Temp package with the module, so I downloaded it. The package conflicted with the perl package; the conflicting part was the module's man page. You don't have to install it, or more precisely, you can bypass installing the package's documentation. Use this command to achieve that:
  • rpm -Uvh --nodocs perl-File-Temp-0.20-1.el4.rf.noarch.rpm
I installed the module in version 0.20, restarted the service, and nothing changed; the error was still there. So what next? The File::Temp module should be O.K., and the perl package wasn't corrupted.

Let's try checking the MIME::Parser module, which the error message mentioned as well. I installed the newest version from CPAN, which was 5.425. According to the package changelog, this version solves some compatibility issues with the module's tmp_recycling() method. I restarted the service and it seemed to be working; the problem disappeared and the mail log stayed clean.

To be sure about the previous steps, I searched with Google once more, this time focusing on misbehaviour between the File::Temp and MIME::Parser modules. I found an interesting article which mentions some incompatibilities between them and says that a new version of the File::Temp module solves them.

The mail server and its content filter are healthy now. The troubles with Perl are gone.

Wednesday, November 28, 2007

SANS Top-20 2007 released

Finally, the annual report of the SANS Top-20 security risks for the current year was released yesterday. The report is published here, and you can read some commentary on it at www.news.com and at security.blogs.techtarget.com. I must point out that there aren't many changes compared to the previous year, 2006.

On the client side, vulnerabilities in web browsers and office suites dominate; on the server side, it is vulnerabilities in web applications and operating system services. In summary, according to SANS, client-side vulnerabilities have a rising tendency, and clients may threaten their companies by careless web browsing. Default configurations of many operating systems are still weak, and web application vulnerabilities account for almost half of all of them.