Magento – How to optimize

Magento’s cache system

Magento & Zend Framework

Before we go on with the various optimization points, everyone needs to truly understand, in depth, Magento's two-level cache mechanism.

The Magento cache system inherits from Zend Framework's (ZF). Nothing really surprising there: Yoav has always been close to Zeev & Andy and likes this framework a lot. Simply put, Magento is an e-commerce framework on top of a coding framework, a kind of "meta-framework".

Zend Framework caching system is described there:

Upon reading it, you will quickly see there is a lot of similarity between the ZF cache and Magento's. You don't need to read it to fully understand this howto and properly set up your Magento, by the way; the key points are explained below.

The Two Level cache mechanisms

The "two-level cache" system combines a fast cache backend with a slow cache backend. The main problem here is that only file and database structures allow the slow backend to be really efficient, because a "home-made" structuring of their content can be done.

With APC or Memcached, we can get a two-way associative structure, but a very simple one. Magento handles large data collections, with types, categories, groups, etc. A simple key/value structure can't really fulfill the needs of Magento's cache. To work properly, Magento needs a slow backend that can be properly structured and carry extended values/capabilities. In the fast_backend, we then only have a key (element id) and the content of this key (the cached element). It's a bit like a mail system that stores a message body under an ID, with another database that structures it all with folders and attributes. The "raw" data is stored in the fast_backend; the intelligent mapping of that data is stored in the slow_backend.

This way, Magento can selectively delete or update a key or a category without having to flush the whole cache every time an update is made.

If you impose a "non-configurable" slow cache backend on Magento, it won't be able to selectively clean the parts of the cache that are impacted by a change, which renders the whole caching mechanism quite useless.

Ramdisks & SSD disks

Declaring a "file" backend, whether for the fast or the slow backend, is not a problem, far from it.

Storing a file on disk is "slow". But you can write that file to a fast medium like an SSD drive, or to an even faster one, the fastest of all: your server's RAM, with a RAMDISK. With DDR3, a hard disk drive is 100 to 300 times slower than RAM. RAM is also at least 20 times faster than the best SSD drives, and far more affordable. Nowadays, most modern servers ship with no less than 8 or 12 GB of RAM, so let's enjoy it!

The VFS cache of the Linux filesystem already improves performance greatly, even if you don't create a ramdisk.

A RAMDISK can store either the fast or the slow backend and gives really good results. Actually, Memcached is only useful if you wish to share your cache backend content between multiple servers, and even then, the proof that it is more efficient than storing your cache content on a per-server ramdisk is not that obvious. If you have only one frontend server, don't hesitate: go for a file cache backend on a RAMDISK, or at the very least don't go for a Memcached server, which is a bit more complicated to set up and handle.

Under Linux, creating a RAMDISK drive is really simple:

mkdir -p /dev/shm/cache
chown -R www-data:www-data /dev/shm/cache
mount -o bind /dev/shm/cache /var/www/magento/site_test/var/cache

Of course, I advise you to put all of this in your startup files in /etc/init.d so that the ramdisk is always ready when your server boots. Likewise, don't forget to properly shut those ramdisks down by adding a script to /etc/rc6.d.
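A minimal /etc/init.d sketch of the idea (paths taken from the commands above; locking and error handling omitted):

```sh
#!/bin/sh
# /etc/init.d/magento-ramdisk - recreate and bind-mount the cache ramdisk at boot
case "$1" in
  start)
    mkdir -p /dev/shm/cache
    chown -R www-data:www-data /dev/shm/cache
    mount -o bind /dev/shm/cache /var/www/magento/site_test/var/cache
    ;;
  stop)
    umount /var/www/magento/site_test/var/cache
    ;;
esac
```

Link it into the runlevels with update-rc.d (or your distribution's equivalent) so start and stop are called automatically.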

Of course, you can store anything on a ramdrive as if it were a *real* disk: sessions, for example, your log files, and anything with heavy write activity. Beware that a ramdrive is "virtual": if your server reboots, you lose its content. So if you intend to store precious, durable information on it (like your logs, and maybe a DB in some cases), you have to commit your files to physical storage from time to time. Every ten minutes for a log file, for example, and why not a binlog synchronization toward a slave instance if you run a DB on a RAMDISK.
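For example, the periodic commit to disk can be a simple crontab entry (the paths here are illustrative):

```
# every 10 minutes, copy the ramdisk logs to a real disk
*/10 * * * * root rsync -a /dev/shm/logs/ /var/log/persistent-copy/
```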

SSD is a good way to get a fast medium that survives reboots, but it is very expensive, not really made for 24x7 production yet, and needs to be doubled into a RAID1 array to be reliable, while still being less efficient than a RAMDISK.

You can also create a fixed-size ramdisk with tmpfs if your server doesn't support shm:

mount tmpfs /path/to/magento/var/session -t tmpfs -o size=64m
mount tmpfs /path/to/magento/var/cache -t tmpfs -o size=64m
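To make these mounts persistent across reboots, they can be sketched as /etc/fstab entries (paths, sizes and permissions to adapt to your setup):

```
tmpfs  /path/to/magento/var/session  tmpfs  size=64m,mode=0775  0  0
tmpfs  /path/to/magento/var/cache    tmpfs  size=64m,mode=0775  0  0
```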

Magento cache system settings

Many misunderstandings of the local.xml parameters lead to problems in Magento. First, let's have an in-depth look at the cache settings in the local.xml file.

Since the 1.4 CE version of Magento, the cache system has been split into these slow & fast backends.

Warning: some of the settings are only available in later versions of Magento (1.10 for EE, for example); thus, check which version you use before inserting some of the settings below.

The cache can store different data:

  • The block cache
  • The Full Page Cache (for the EE version)
  • Any other cache a developer wishes to push to the backends using Magento's native cache methods (like Nitrogento does)

The file that configures all those items can be found at app/etc/local.xml within your Magento directory. This XML file contains the instructions for most of the key settings of Magento's cache, session and DB handling. For our topic, the cache settings are enclosed between <cache> and </cache>.

Warning: implicit declaration (not precisely setting every parameter by hand in the file) can lead to unexpected behavior or a less performing system. So I advise you to declare everything explicitly, so you are sure everything is handled the way YOU want it, and not the way the default settings would do it for you.

In the local.xml, many XML tags are important:


If you only use the Backend tag, it applies to the fast_backend, and the slow backend will be set to File. This is a backward-compatibility behavior, I guess. Of course, you can also use the explicit memcached tag.
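For illustration, a minimal sketch of the implicit form (tag names follow Magento's standard local.xml; the value is an example):

```xml
<cache>
    <!-- <backend> alone configures the fast backend;
         the slow backend then defaults to File -->
    <backend>memcached</backend>
</cache>
```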

The behavior seems to be the same in CE 1.4+ and EE 1.9+: both describe the fast backend type, for which Magento expects a value among the following:

  • memcached
  • File
  • apc
  • or even sqlite, xcache, eaccelerator, database

The slow backend can be picked from this list:

  • File
  • database
  • memcached
  • apc
  • or even sqlite, xcache, Varien_Cache_Backend_Eaccelerator, Varien_Cache_Backend_Database

On a distributed architecture (multiple front web servers), memcached and database are needed as the fast & slow backends in order to share a common cache (File is only local, as is apc, etc.).

If you have a single server, a File backend can be a very good choice. By the way, Vinai Kopp improved the performance of the File cache backend with a very good extension that you can find there:

Actually, in my opinion, the slow backend type can only be set to either file or database, since they are the only two backends efficient at storing a structured cache the way Magento needs it.

A common misunderstanding is that the slow backend is only there to store data when the fast backend is 80% full, which is only partly true. In fact, the slow backend primarily stores the associations between the cache keys and their types, while the fast backend stores the "real" data that needs to be cached. If you only have a fast backend and deactivate the slow backend, you lose all the benefit of the fast backend, since Magento will not be able to do partial cleaning and will have to flush everything in both caches whenever an event requires a cache operation.

But we don't want to store data in the slow backend if possible, since it's not an optimized way of working and the slow backend can be really slower (depending on your config) than the fast one. We can prevent Magento from storing data in the slow backend with <slow_backend_store_data>0</slow_backend_store_data>. This only deactivates the storage of cache data in the slow backend; it does not disable the slow backend itself. It's only available on Magento versions more recent than EE 1.8 and CE 1.4 (not included, meaning at least EE 1.9 / CE 1.5).

The two-level cache has an auto_refresh_fast_cache option: when enabled, a hit on a cache record refreshes its copy in the fast backend, as long as the fast backend is less than 80% full. It's useless if you use Memcached. <auto_refresh_fast_cache>0</auto_refresh_fast_cache>

Setting this parameter to 0 allows you to keep the cache active for a longer time.

If you deactivate auto_refresh_fast_cache, you avoid a SET and a GET at every refresh; on the other hand, you rely only on the lifetime to maintain your cache data, meaning you have to make it longer (see below).

The lifetime is a key parameter that is often (if not always) forgotten. If not set explicitly, it defaults to 7200 seconds (2 hours), meaning your cache automatically expires every 40 minutes (I will explain below why the 2 hours magically become 40 minutes).

The second parameter that plays on the cache life cycle is priority, but we can't influence it. If we set the cache lifetime to 86400 seconds, we get a far more efficient fast cache, since it doesn't become useless (and need regenerating) every 40 minutes. The (simplified) formula that computes the cache lifetime is the following:

fast_backend cache lifetime = lifetime / (11 - priority)

Priority is always 8 and not alterable. Thus, the fast backend lifetime is always divided by (11 - 8) = 3 compared to the value you put in your local.xml. The Zend cache default value here is 30 days, but Magento overrides these settings, ending at 40 minutes (7200 seconds divided by 3).
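To check the arithmetic, a quick shell sketch of the formula above with Magento's defaults:

```shell
# effective fast-backend lifetime = lifetime / (11 - priority), priority fixed at 8
lifetime=7200
priority=8
echo $(( lifetime / (11 - priority) ))   # 2400 seconds = 40 minutes
```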

If we put <lifetime>86400</lifetime> in the <fast_backend_options>, we then have a cache that only expires every 8 hours (86400 / 3 = 28800 seconds) instead of every 40 minutes. Your modifications in the back office will still have a direct impact, since the keys are selectively invalidated, but you get a more resilient cache. If you want 24 hours on a normal production cycle, a lifetime value of 259200 can then be used.

Another interesting feature to expand your fast backend's cache size: Memcached is able to compress the data (low CPU cost, great space gain): <compression>1</compression>

An optimized local.xml <cache> section example

This is only an example that needs to be adapted to your needs and infrastructure capabilities; it's valid for a recent CE version (or EE 1.10+). Don't cut/paste: understand (and test before production)!

[Thanks to Adrien Urban of NBS System for the version information and the default values of the Zend & Magento cache lifetimes, and to Olivier for his input]

If you use an infrastructure with multiple front servers, I would advise these cache settings (Magento version EE 1.9+ or CE 1.5+):

<!-- <compression>1</compression> -->
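The original snippet did not survive formatting; here is a sketch of what such a multi-server <cache> section could look like (tag names follow standard Magento 1 local.xml; hosts and values are examples, and the exact tag placement should be checked against your version's Mage_Core_Model_Cache):

```xml
<cache>
    <backend>memcached</backend>
    <slow_backend>database</slow_backend>
    <slow_backend_store_data>0</slow_backend_store_data>
    <auto_refresh_fast_cache>0</auto_refresh_fast_cache>
    <lifetime>86400</lifetime>
    <memcached>
        <servers>
            <server>
                <host>127.0.0.1</host>
                <port>11211</port>
                <persistent>1</persistent>
            </server>
        </servers>
        <compression>0</compression>
    </memcached>
</cache>
```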

If you use one single server, I would advise a File backend on a RAMDISK (Magento version EE 1.9+ or CE 1.5+):
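As a sketch (to verify against your version; var/cache itself sits on the ramdisk thanks to the bind mount shown earlier):

```xml
<cache>
    <backend>file</backend>
    <!-- var/cache is bind-mounted on the /dev/shm ramdisk created above -->
    <lifetime>86400</lifetime>
</cache>
```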


For a generously provisioned server, you can give memcached a large space and skip the compression to save CPU (still a very small win).

For an autonomous server (no multiple front web servers), you can also use APC as the fast_backend and file or DB as the slow_backend.

For earlier versions, a solution worth looking at is the AOE cache cleaner, which you can launch every hour through a cron, for example. Fabrizio Branca's blog post will give you more information about it.

Memcached : be careful

When the Memcached library and binaries are older than version 3.0.3, the fill-percentage calculation of the fast_backend goes wrong and Magento thinks it is always full, falling back every time to the slow_backend.

Magento import seems slow?

Magento import cycles can be quite slow, even very slow, with 10 to 30 products imported per second. What about going to 150 to 300 products imported per second?

While importing your products (and only during this phase), you can:

Totally deactivate both caches, slow & fast. Magento inserts a cache entry for every product inserted (actually doing the operation twice, for a complicated reason), which is totally useless, and then falls back into the slow_backend when the fast one is full (most likely to happen on a large import). Not only does this not help, it slows you down 3 or 4 times! Don't trust me? Give it a try.

You should deactivate the binlog mechanism of MySQL (master/master or master/slave setups) while importing. It is useless at that moment; you can bulk-synchronize later using mysqldump, or resynchronize afterwards. (20% faster)

Set the variable innodb_flush_log_at_trx_commit to 2 in your my.cnf config file to avoid over-cautious writes during a non-production time frame.
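Both MySQL tweaks, sketched as a my.cnf fragment for the import window only (revert afterwards; the exact binlog option name varies with your MySQL version):

```
[mysqld]
innodb_flush_log_at_trx_commit = 2
# comment out your log-bin line (or use skip-log-bin) for the duration of the import
```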

You want to go hell fast? Mount your DB files on a RAMDISK while importing (or on an SSD drive).

Ready? Import and enjoy a 10x faster import!


Nitrogento optimizes a lot of different things in Magento: bringing FPC (Full Page Cache) to the CE version, optimizing your CSS/JS/HTML and your response headers, adding new block cache entries, setting the Etags and expire values for you, creating the sprite for your home page and managing the CDN/media (among other features). It seriously boosts your Magento store performance; you can test & buy it there: The extension is presented on Magento's website here, and a benchmark can be found there:

The code quality & engineering

It is important to understand that code quality is the first lever for good or bad performance. The template quality, the way you build your code or organize the catalog: all of this is part of your website's future performance. This howto is not about Magento coding, so we'll stick to a "cache"-oriented vision.

You should especially avoid some classical performance traps. First of all, the cache mechanism is not implicit in Magento: if you want something to be properly cached, you have to declare it.

If a block declares an incorrect tag association, its cache will be invalidated as soon as a content is modified, even if the modification doesn't directly concern this block. For example, if we modify a product, when the keys related to this catalog_product are deleted, the cache for the block will also be invalidated. If the product block is properly tagged, no problem: it is normal that the block is invalidated. But if the subscription is too broad, the block can be wiped from the cache uselessly.

Likewise, if you subscribe a block to too-large or generic tags like catalog_product, instead of something narrower and more precise like catalog_product_xx, the cache will be invalidated too often and uselessly. For example, a block cache showing product attributes on a product page that is subscribed to CATALOG_PRODUCT will be wiped whenever a modification occurs anywhere in the catalog, even if this product is not concerned.

On the opposite side, if you do not tag the block caches properly, so that they can be refreshed when needed, you will get outdated information on the front end.

As well, tampering with the HTTP headers can lead a reverse proxy to cache something that shouldn't be cached, or to serve session-related information to another user than the one who should get it.

Magento performances, advanced tips

To go beyond the cache parameters alone, you can get a lot more out of your servers by optimizing other points. Here are some:


The three major hardware manufacturers (IBM, Dell and HP) have comparable performance. For our needs, we chose HP for its good integration and advanced services. Dell and IBM also provide reliable hardware (IBM more than Dell: an IBM server can last 10 years where a Dell server usually gets some issues past its 4th year), but Dell has a much more attractive pricing grid on average. On the processor side, Intel is far more expensive but faster for front-end (web) servers, while AMD is a wise choice for your databases, since it is less expensive and will give you very similar performance.

Don't forget to have a look at your BIOS settings: some servers are delivered with a very conservative energy setting that can deeply impact your performance. You can handle this energy policy far more efficiently with a Linux daemon; don't let the BIOS do it for you.

Server settings          

The OS question is not a real question: avoid Windows and go for a Unix, preferably a Linux, since you will get a lot of community support. NetBSD behaves well too; avoid Solaris for performance and compatibility reasons. For the Linux kernel, I advise you to test the different schedulers, since they offer really different performance and the default settings are oriented toward "generic" use rather than a web server. Also add GRsec/PaX for your own security, and go for a static kernel build to avoid module backdoors. Don't forget to get rid of what is not needed, and finally use irqbalance to properly spread your I/O interrupts over your multiple cores (usually only one core handles this task).

On your web server, mount your filesystem with the noatime and nodiratime flags. Without them, the kernel updates (and thus writes to disk) the last access time of files, which is totally useless on a web server. Who cares when logo.jpg was last accessed?

/var/www ext3 defaults,grpquota,noatime,nodiratime,data=ordered 0 2

Web server parameters

If you use Apache, here is a decent configuration:

# Two minutes is more than enough to detect a timeout
Timeout 120
# Keepalive helps improve loading times, but consumes resources,
# so limit the number of requests and keep the timeout short
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15
# DNS lookups cost resources and sockets; resolve later if needed
HostnameLookups Off
# Yes, mpm_prefork: it consumes RAM but is fast & reliable, especially
# when you are not sure that all your modules are threadsafe;
# we start 5 servers and allow the pool to grow
StartServers 5
MinSpareServers 5
MaxSpareServers 10
# If using a reverse proxy, limit the number of Apache processes
# for memory reasons
ServerLimit 128
MaxClients 64
MaxRequestsPerChild 1000

Nginx + PHP-FPM will give you better results, at the price of a painful process of gathering your .htaccess directives (not supported by Nginx) into a configuration file. Besides, even if it is faster than Apache, Nginx is not officially supported by Magento and Zend, which could raise issues when talking to Magento support.

But obviously, gathering all your .htaccess directives in a central file (the vhost, for example) spares the web server the burdening task of reading every .htaccess each time a file is accessed. A good optimization to take as well: a few more % for your performance. If you script this conversion and have a CE version, you can seriously think about moving to Nginx + PHP-FPM.

From time to time, if you store sessions on disk, you should clean the old, useless sessions (here, older than 5 days):

find /path/to/session/* -mtime +5 -exec rm {} \;
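Automated as a crontab entry, for example (the path is the illustrative one from above):

```
# purge session files older than 5 days, every night at 3am
0 3 * * * root find /path/to/session/ -type f -mtime +5 -delete
```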

In your vhost file or .htaccess, you can add this:

ExpiresActive on
ExpiresByType image/jpg "access plus 6 months"
ExpiresByType image/jpeg "access plus 6 months"
ExpiresByType image/png "access plus 6 months"
ExpiresByType image/gif "access plus 6 months"
ExpiresByType text/ico "access plus 6 months"
ExpiresByType image/ico "access plus 6 months"
ExpiresByType image/icon "access plus 6 months"
ExpiresByType image/x-icon "access plus 6 months"
ExpiresByType application/x-shockwave-flash "modification plus 6 months"
ExpiresByType text/css "access plus 1 week"
ExpiresByType text/javascript "access plus 1 week"
ExpiresByType text/xml "modification plus 2 hours"
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
Header append Vary User-Agent env=!dont-vary
AddOutputFilterByType DEFLATE text/css application/x-javascript text/x-component text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon
FileETag MTime Size

You will then leverage browser caching, letting your visitors keep the objects they have already fetched, making their browsing way faster and offloading your web server / reverse proxy at the same time. A definite win/win situation you should take advantage of. Warning: caching HTML (and using a default value for all resources) can cause trouble with Magento, especially with the cart.

Reverse proxy

A reverse proxy seriously boosts your website performance, even if it is only installed locally and not in an infrastructure mode (dedicated servers). Nginx & Varnish are really good at this: Varnish is top for a localhost install, and Nginx is perfect for an infrastructure mode. These reverse proxies will store your frequently accessed files, especially the static ones, offloading the web servers from this task, and they can store HTML as well if allowed to.

With Varnish, you can also try the ESI protocol to add an efficient way of caching the blocks.

Media server & CDN

Splitting the media from the main www servers is always a good idea, for two reasons: first, you offload the www server from this task; second, your customers will get the static resources way faster. A typical browser is allowed to fetch between 8 and 12 resources (say 10 on average) in parallel from a given hostname (www, cdn, media, ftp, etc.). By multiplying the CNAMEs for your resources, you let the browser fetch them 10 by 10 from each CNAME, instead of fetching them all, 10 by 10, from the same one.
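A sketch of what the extra CNAMEs could look like in a DNS zone file (names and domain are examples; each hostname must then be declared as a media/skin base URL in Magento's back office):

```
; parallelize static downloads over several hostnames
media1  IN  CNAME  www.example.com.
media2  IN  CNAME  www.example.com.
skin    IN  CNAME  www.example.com.
```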

PHP Parameters

I won't put a full config file here, but here are some key settings:

output_buffering = 4096 
max_execution_time = 7200 
max_input_time = -1
memory_limit = 256M
default_socket_timeout = 60
pdo_mysql.cache_size = 2000
mysql.allow_persistent = On

Warning: putting a very high memory limit for your PHP threads in php.ini or a .htaccess file is dangerous and not recommended. If, for any reason, some PHP code (cron or page) starts consuming a lot of memory up to a high limit, 2 GB for example, you can get a resource shortage and start swapping, which is a definite performance killer.

Your memory_limit value should stay under 512 MB, preferably 256, and it is far more efficient and safe to optimize the code than to set a very high memory limit. The memory_limit applies to every PHP thread executed by an Apache server: if 4 or 5 threads start over-consuming memory with the limit set to 2 GB, the server very quickly runs out of RAM.
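A quick back-of-the-envelope sketch of why a huge memory_limit is dangerous, using the numbers from the paragraph above:

```shell
# a handful of runaway PHP workers at a 2 GB memory_limit is enough
# to exhaust a typical 8 GB server
runaway_workers=5
limit_mb=2048
echo "$(( runaway_workers * limit_mb )) MB"   # 10240 MB, more than the 8192 MB available
```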

Regarding the max_execution_time parameter, 30 seconds is usually enough, but if you have very long-running tasks (import/export crons), you can still raise it. The problem is that if some code of the website never exits properly, those processes will stay in memory (consuming it, see above) and then, maybe, crash your server. You have to know that those "cron crashes" are the number one reason for server troubles at the service desk. Not to be taken lightly.

Parameters for the database

If your MySQL is version 5.1 or earlier, it is useless to give more than 4 cores to your DB server: it simply won't use them.

Some interesting settings for your MySQL server:

innodb_buffer_pool_size = 3G      # ~66% of the available memory (3 GB on an 8 GB server housing web and database, for example)
innodb_thread_concurrency = 8     # 2 * (total number of cores)
thread_cache_size = 128
max_connections = 512             # should be enough ((max simultaneous connections + 1) * 1.5)
thread_concurrency = 8
table_cache = 1024                # should be enough for everybody...
query_cache_size = 128M
query_cache_limit = 2M
sort_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2  # safe vs speed: 0 speed, 1 safe, 2 mixed
innodb_log_buffer_size = 16M
innodb_log_files_in_group = 2
innodb_additional_mem_pool_size = 8M
innodb_log_file_size = 512M
join_buffer_size = 8M
tmp_table_size = 256M
key_buffer = 32M
innodb_data_file_path = ibdata1:3G;ibdata2:1G:autoextend
max_connect_errors = 10
max_allowed_packet = 16M
max_heap_table_size = 256M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_max_extra_sort_file_size = 10G
myisam_repair_threads = 1

The most efficient official MySQL backend for your database is the InnoDB plugin, which is faster than the built-in InnoDB. The Percona InnoDB backend (XtraDB) offers a serious performance boost, but you then rely on them to update their fork when MySQL evolves (which they actually do), and it also allows backups while operating (hot backup).


  • Don't forget to activate the caches (block & FPC if you have EE 1.9+; before that it's not really working)
  • Activate mage_compiler
  • Deactivate the debug mode (it's just killing your performance)
  • Activate the flat catalog
  • Switch the search engine to fulltext mode

Security of your Magento site

The best Web Application Firewall now is NAXSI, an Nginx module that is open source and free. It costs Nginx less than 1% in performance and does a far more evolved job (whitelisting) than mod_security, without impairing the web server's performance the way mod_sec does. Naxsi has been integrated by OWASP:

Here are some keys for your safety:

  1. Put your local.xml file in chmod 500 (this will avoid exposing your database login/pass publicly to Google...)
  2. Filter all access to your web server and restrict it to precise IPs, except for ports 80 & 443 (this script is basic but should do part of the job; don't forget to replace xxx with your IP). If you use FTP (unsafe) instead of SCP, you will have to open the FTP ports too.


    /sbin/iptables -P INPUT DROP
    /sbin/iptables -P OUTPUT ACCEPT
    /sbin/iptables -I INPUT -s xxx.xxx.xxx.xxx -p tcp --dport 22 -j ACCEPT
    /sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT
    /sbin/iptables -I INPUT -p tcp --dport 443 -j ACCEPT

  3. Install & parameter GRsec/PAX
  4. Use real & valid SSL certificates, issued by trustworthy people
  5. Put a REAL password on your backoffice (a random one, of at least 8 alphanumeric chars if possible)
  6. Keep your Magento version up to date, or at least apply the security patches
  7. Keep your services & Linux updated, or apply the patches
  8. Add htaccess authentication on your admin directory (and preferably rename/rewrite this dir to another name). Do the same for any phpMyAdmin. (Don't put any "limit GET, POST": this allows hackers to circumvent the protection.) The syntax is easy and can be put in your Apache conf file or an .htaccess file (the Allow from lets you bypass the login/pass from your IP):


    AuthUserFile /path/to_file
    AuthName "Admin area"
    AuthType Basic
    Order allow,deny
    Allow from xxx.xxx.xxx.xxx
    Require valid-user
    Satisfy any

  9. Remember to filter your user inputs (use the ZF or Magento methods to do it) and use Naxsi. You can learn more about this here.
  10. Have your website pentested/audited by security professionals before going weapon-hot
  11. Add this to your Apache configuration file:


    # the less they know, the better
    ServerTokens Prod
    ServerSignature Off

  12. If you insist on using mod_security instead of Naxsi, losing a bit of performance and getting less security, you can use this very basic configuration:

SecFilterEngine On
SecFilterDefaultAction "deny,log,status:403"
SecFilterScanPOST On
SecFilterCheckURLEncoding On
SecFilterCheckUnicodeEncoding Off
SecFilterForceByteRange 1 255
SecAuditEngine RelevantOnly
SecAuditLog logs/audit_log
SecFilterDebugLog logs/modsec_debug_log
SecFilterDebugLevel 0
SecFilterSelective REQUEST_METHOD "!^(GET|HEAD)$" chain
SecFilterSelective HTTP_Content-Type "!(^application/x-www-form-urlencoded$|^multipart/form-data;)"
SecFilterSelective REQUEST_METHOD "^POST$" chain
SecFilterSelective HTTP_Content-Length "^$"
SecFilterSelective HTTP_Transfer-Encoding "!^$"
SecFilter /etc/passwd
SecFilter /bin/ls
SecFilterSelective REQUEST_METHOD "TRACE"
SecFilterSelective THE_REQUEST "/etc/shadow"
SecFilterSelective THE_REQUEST "/bin/ps"
SecFilter "\.\./"
SecFilterSelective THE_REQUEST "wget "
SecFilterSelective HTTP_USER_AGENT "DTS Agent"
SecFilter "delete[[:space:]]+from"
SecFilter "create[[:space:]]+table"
SecFilter "update.+set.+="
SecFilter "insert[[:space:]]+into"
SecFilter "select.+from"
SecFilterSelective ARGS "drop[[:space:]]+database"
SecFilterSelective ARGS "drop[[:space:]]+table"



Thanks to NBS system for their contribution.

