Linux - Shine Servers: Illuminating IT Solutions Since 2012 (https://www.shineservers.com)

How to Set Up a VPN and RDP on Local IPs in CentOS 7
https://www.shineservers.com/2023/07/09/how-to-set-up-a-vpn-and-rdp-on-local-ips-in-centos-7/ (Sun, 09 Jul 2023)


For IT professionals or those dabbling in network administration, the ability to establish secure connections between remote systems is a must. This blog post will guide you through the steps to set up a VPN server and configure RDP for local IPs on a CentOS 7 system.

Setting up the VPN Server

Firstly, you need to install an OpenVPN server. OpenVPN is an open-source VPN software that enables secure point-to-point connections.

  1. To install OpenVPN and easy-rsa packages, use the following command:

    sudo yum install -y openvpn easy-rsa
  2. After the installation, navigate to the OpenVPN directory and copy the sample configuration file:

    cd /etc/openvpn/
    sudo cp /usr/share/doc/openvpn-*/sample/sample-config-files/server.conf ./
  3. You need to make the necessary edits to the server.conf file for proper operation of your VPN. Open the file in your preferred text editor (for example, nano or vi), look for the lines starting with “dh”, “ca”, “cert”, and “key”, and ensure they point to the correct locations. Also, uncomment the push “redirect-gateway def1 bypass-dhcp” line.
  4. Restart the OpenVPN service to apply the changes:

    sudo systemctl restart openvpn@server
  5. Make sure OpenVPN starts on boot:

    sudo systemctl enable openvpn@server
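As a reference for step 3, the relevant server.conf lines usually end up looking something like the excerpt below. The exact certificate and key paths depend on where easy-rsa generated them, so treat these paths as placeholders:

```
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh2048.pem

# Route all client traffic through the VPN
push "redirect-gateway def1 bypass-dhcp"
```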

Configuring DNS

Now, for your VPN users to resolve domain names correctly, you need to set up a DNS server and push its IP to them.

  1. Install a DNS server such as BIND: sudo yum install bind
  2. Configure the BIND service and set the DNS forwarders to point to your preferred DNS servers.
  3. In the OpenVPN configuration file, push your DNS server’s IP to the clients:

    push "dhcp-option DNS 192.168.1.100"
  4. Restart the OpenVPN service to apply the changes.
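For step 2, a minimal forwarders block in /etc/named.conf might look like the sketch below. The 192.168.1.100 address matches the DNS IP pushed to clients above, while the listen/query ranges and the public forwarder addresses are assumptions you should adapt to your network:

```
options {
    listen-on port 53 { 127.0.0.1; 192.168.1.100; };
    allow-query { localhost; 192.168.1.0/24; };
    recursion yes;
    forwarders {
        8.8.8.8;
        1.1.1.1;
    };
};
```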

Setting up RDP on Local IPs

For RDP connections to local IPs, ensure that RDP is enabled on the target VMs. On Windows VMs, you can enable RDP through the System Properties settings.

Please note that you need to configure the firewall rules correctly to allow RDP connections over the VPN.
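With firewalld (the default on CentOS 7), those rules might look like the commands below. tun0 is the usual OpenVPN tunnel interface and 3389 is the standard RDP port, but verify both on your system before applying this sketch:

```
# Allow OpenVPN itself (UDP 1194 by default)
firewall-cmd --permanent --add-service=openvpn

# Trust traffic arriving over the VPN tunnel interface
firewall-cmd --permanent --zone=trusted --add-interface=tun0

# Allow RDP traffic to the local VMs
firewall-cmd --permanent --add-port=3389/tcp

# Enable NAT so VPN clients can reach the local network
firewall-cmd --permanent --add-masquerade

firewall-cmd --reload
```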

Wrapping Up

With these settings in place, you will be able to route your internet traffic securely through the VPN, as well as establish RDP connections to your local VMs. Note that your network performance might decrease because all traffic has to be encrypted and routed through the VPN server. It is always important to consider this trade-off when setting up VPN services.


This is a general overview, and the specifics might vary based on your environment and needs. Always make sure to thoroughly test your setup to ensure everything is working as expected.

Root Password Recovery for RHEL, CentOS 7 Linux
https://www.shineservers.com/2023/01/04/root-password-recovery-for-rhel-centos-7-linux/ (Wed, 04 Jan 2023)

One of our clients recently came to us with a problem: he had forgotten his root password and could not afford to lose any data, so reinstalling the OS to regain access was not an option. Our team suggested a password reset for this CentOS 7 system instead. Here are all the steps we walked him through to regain access to his machine.

OS: CentOS 7 (Core), RHEL 7

We already have IPMI Access, so we accessed the server using KVM.

  1. The first step is to reboot the system and edit the grub2 boot parameters. Once you see the GRUB menu, press an arrow key to hold the screen, then press ‘e’ to edit the selected entry.
  2. Look for the line that begins with linux16 (or linuxefi if you are booting via UEFI). You may need to use the arrow keys to scroll down. At the end of that line, find the rhgb quiet parameters and replace them with rd.break enforcing=0.
  3. Once you have edited the parameters accordingly, hit CTRL-X to start the boot process with the new parameters. Instead of booting normally, the system will drop you into an emergency shell at the switch_root:/# prompt.
  4. Enter the following command to remount the sysroot filesystem as read/write: 
    mount -o remount,rw /sysroot
  5. Now we chroot into the sysroot, using the following command: chroot /sysroot
  6. We can use the passwd command to change the root password.
  7. Issue the following command to bring us back to the switch_root:/# prompt: exit
  8. Enter the following command to remount the sysroot filesystem as read-only once again: 
    mount -o remount,ro /sysroot
  9. Now we can exit the session and allow the system to reboot using the following command: exit
  10. Allow the system to boot normally and login as root using the new password that you set in step 6.
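The whole sequence from the emergency shell can be summarized as follows. One caveat the steps above gloss over: because the password is changed while SELinux is not running, the context on /etc/shadow may be wrong afterwards. Booting with enforcing=0 lets you log in anyway, but it is a good idea to restore the context once you are back in:

```
# From the switch_root:/# prompt after booting with rd.break enforcing=0
mount -o remount,rw /sysroot    # make the real root writable
chroot /sysroot                 # enter the installed system
passwd                          # set the new root password
exit                            # leave the chroot
mount -o remount,ro /sysroot    # remount read-only
exit                            # continue booting

# After logging in as root with the new password:
restorecon /etc/shadow
```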

How To Setup Virtualisation With KVM On A CentOS (SolusVM Slave)
https://www.shineservers.com/2016/09/03/setup-virtualisation-kvm-centos-solusvm-slave/ (Sat, 03 Sep 2016)


Steps To Setup:

Part 1 – Disk Setup

fdisk is the most commonly used command to check the partitions on a disk. It can display the partitions and details such as the file system type; however, it does not report the size of each partition.

$ sudo fdisk -l

You cannot create a Linux partition larger than 2 TB using the fdisk command. This is fine for desktop and laptop users, but on a server you may need a larger partition; for example, fdisk will not let you create a 3 TB or 4 TB (RAID-based) partition.

Creating 4 TB Partition Size

To create a partition start GNU parted as follows:

$ parted /dev/sdb

Create a new GPT partition table:

$ (parted) mklabel gpt

Next, set the default unit to TB, enter:

$ (parted) unit TB

To create a 4 TB partition size, enter:

$ (parted) mkpart primary 0.00TB 4.00TB

To print the current partitions, enter:

$ (parted) print

Quit and save the changes, enter:

$ (parted) quit

If you plan to use the partition directly as a file system, format it with the mkfs.ext4 command (optionally you can use mkfs.ext3 if needed):

$ mkfs.ext4 /dev/sdb1

For an LVM setup such as a SolusVM slave, however, the partition is used as a physical volume instead, so formatting it first is unnecessary (pvcreate will overwrite it anyway). Create the PV with the following command:

$ pvcreate /dev/sdb1

You can check that new PV through this command:

$ pvscan

Create the Volume Group:

$ vgcreate -s 32M vg1 /dev/sdb1

You can check that new volume group through this command:

$ vgdisplay
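At this point the volume group is ready for SolusVM to carve logical volumes out of. As a quick sanity check (the 1 GB size and the testlv name here are purely illustrative), you could create and remove a throwaway LV:

```
# Create a small test logical volume in vg1
lvcreate -L 1G -n testlv vg1

# Confirm it exists
lvs vg1

# Remove it again; SolusVM will manage LVs itself
lvremove -f /dev/vg1/testlv
```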

Part 2 – Network Setup

Bridging requires the bridge-utils package to be installed on the server. To check if it’s installed, do the following:

$ rpm -q bridge-utils

If you get an output – it’s installed, if not, it needs installing:

$ yum install bridge-utils

Before setting up your bridge, the contents of /etc/sysconfig/network-scripts/ifcfg-eth0 will look like the following:

DEVICE=eth0
BOOTPROTO=static
BROADCAST=102.100.152.255
HWADDR=00:27:0E:09:0C:B2
IPADDR=102.100.152.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=102.100.152.0
ONBOOT=yes

To back up your current ifcfg-eth0 before modification:

1. Run the following command:

$ cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/backup-ifcfg-eth0

2.Create the bridge file:

$ nano -w /etc/sysconfig/network-scripts/ifcfg-br0

3. Copy parts of ifcfg-eth0 to it:

DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
BROADCAST=102.100.152.255
IPADDR=102.100.152.2
NETMASK=255.255.255.0
NETWORK=102.100.152.0
ONBOOT=yes

4. Save that file and edit ifcfg-eth0:

$ nano -w /etc/sysconfig/network-scripts/ifcfg-eth0

5. Remove the networking parts and specify the bridge:

DEVICE=eth0
HWADDR=00:27:0E:09:0C:B2
IPV6INIT=yes
IPV6_AUTOCONF=yes
ONBOOT=yes
BRIDGE=br0

6. The bridge is now set up. Make sure that the changes are correct and restart the networking:

$ /etc/init.d/network restart

7. Once networking has restarted, you will see the new bridge in the ifconfig output:

[root@bharat ~]# ifconfig
br0       Link encap:Ethernet  HWaddr 00:27:0E:09:0C:B2
inet addr:102.100.152.2  Bcast:102.100.152.255  Mask:255.255.255.0
inet6 addr: fe80::227:eff:fe09:cb2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:48 errors:0 dropped:0 overruns:0 frame:0
TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2984 (2.9 KiB)  TX bytes:13154 (12.8 KiB)

eth0      Link encap:Ethernet  HWaddr 00:27:0E:09:0C:B2
inet6 addr: fe80::227:eff:fe09:cb2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:31613 errors:0 dropped:0 overruns:0 frame:0
TX packets:9564 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:2981335 (2.8 MiB)  TX bytes:2880868 (2.7 MiB)
Memory:d0700000-d0720000
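To confirm that eth0 is actually enslaved to the bridge, you can also query brctl (from the bridge-utils package installed earlier). With the interface names used in this example, the output would look something like:

```
# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.00270e090cb2       no              eth0
```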


Part 3 – Installing a SolusVM KVM Slave:

In SSH as root do the following:

$ wget http://soluslabs.com/installers/solusvm/install

$ chmod 755 install

$ ./install

Now, follow the on-screen prompts (the original post demonstrates these steps in a video).

The installer will now do its work and print progress output (the text may vary).

Once the installer is complete you will be presented with the slave keys and any further instructions for your install type.

Adding Secondary IP Addresses (CentOS/RHEL)
https://www.shineservers.com/2016/06/11/adding-secondary-ip-addresses-centosrhel/ (Sat, 11 Jun 2016)

There are plenty of reasons you would need to add secondary IP addresses (and everyone agrees that SEO is not one of them). Adding a secondary IP address is a simple process if it is done for the right reasons and done correctly. You do NOT need additional NIC cards; instead you will be creating virtual adapters, as the secondary IP will be routed through the primary IP.

Also, this is a great thing to do at home; I’ve done it to run multiple internal IP addresses on one server to run multiple applications across the same ports (for KISS** sake). Please note that I am doing this in a virtual testing environment, so your settings will definitely be different.

** KISS = Keep It Stupid Simple **

You will need to be the root user and navigate to your /etc/sysconfig/network-scripts

# cd /etc/sysconfig/network-scripts

When getting a list of files in the directory you will see “ifcfg-eth0” (or eth1 if you’re doing it for a different adapter)

# ls -l | grep ifcfg-eth
-rw-r--r-- 1 root root 119 Jan 11 19:16 ifcfg-eth0
-rw-r--r-- 1 root root 119 Jan 3 08:45 ifcfg-eth0.bak
-rw-r--r-- 1 root root 119 Feb 24 04:34 ifcfg-eth1
-rw-r--r-- 1 root root 128 Jan 19 18:20 ifcfg-eth1.bak

Now adding the virtual adapters is easy. Basically if the main adapter is called “eth0” you have to call the next (virtual) adapter in a sequential order like so:

ifcfg-eth0 (primary adapter, physical)
ifcfg-eth0:1 (first virtual adapter to the physical primary adapter)
ifcfg-eth0:2 (second virtual adapter to the physical primary adapter)
and so on…

That being said, let’s go ahead and copy our primary adapter configuration file and name it as the first virtual adapter for the physical primary:

# cp ifcfg-eth0 ifcfg-eth0:1

# ls -l | grep ifcfg-eth
-rw-r--r-- 1 root root 119 Jan 11 19:16 ifcfg-eth0
-rw-r--r-- 1 root root 119 Feb 24 08:53 ifcfg-eth0:1
-rw-r--r-- 1 root root 119 Jan 3 08:45 ifcfg-eth0.bak
-rw-r--r-- 1 root root 119 Feb 24 04:34 ifcfg-eth1
-rw-r--r-- 1 root root 128 Jan 19 18:20 ifcfg-eth1.bak

Now we have to configure this virtual adapter: it needs a static IP (of course), no hardware (MAC) address, a netmask, and of course a new device name.

# vim ifcfg-eth0:1
DEVICE=eth0:1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.1.1.2
NETMASK=255.255.255.0

There is no need to specify a MAC address as it is a virtual adapter and there is also no need to specify a default gateway as it is already routed through the primary adapter. Basically there are only four things that you will need to change:

File name for the adapter itself

DEVICE= device name (should correspond with the file name)
IPADDR= ip address
NETMASK= netmask

Afterwards, just restart the networking service:

# service network restart

That’s it; let’s check ifconfig to make sure the virtual adapter is there and working:

# ifconfig eth0:1
eth0:1 Link encap:Ethernet HWaddr 08:00:27:ED:05:B7
inet addr:10.1.1.2 Bcast:10.1.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

# ping 10.1.1.2
PING 10.1.1.2 (10.1.1.2) 56(84) bytes of data.
64 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 10.1.1.2: icmp_seq=3 ttl=64 time=0.029 ms
64 bytes from 10.1.1.2: icmp_seq=4 ttl=64 time=0.029 ms
--- 10.1.1.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.029/0.043/0.073/0.018 ms

If you’re not sure whether you’ve done it right and you do not want to restart the entire network service, you can bring up just the new adapter:

# ifup eth0:1
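As a side note, on newer releases the same secondary address can also be added non-persistently with the ip command (using the example address from this post); this is handy for quick tests but is lost on reboot, unlike the ifcfg file above:

```
# Add the secondary address with the same eth0:1 label
ip addr add 10.1.1.2/24 dev eth0 label eth0:1

# Verify
ip addr show dev eth0
```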

Add GNOME to a CentOS Minimal Install
https://www.shineservers.com/2016/04/23/add-gnome-centos-minimal-install/ (Sat, 23 Apr 2016)

Introduction

In most instances, the Linux servers I set up are used to host the Oracle database software and only require the Command-Line Interface (CLI) for the OS. This is beneficial because I only need to perform a minimal installation and can add only those Linux packages (RPMs) required to support the database. However, there are situations where I need access to a graphical desktop in order to install or run certain Graphical User Interface (GUI) applications.

This guide provides the steps needed to add the GNOME Desktop to a CentOS minimal installation where the OS was installed without the X Window System.

CentOS 6

In this section, the GNOME desktop will be added to a new server running CentOS 6.2 (x86_64) after performing a “Minimal” install.

Install Desktop Packages

# yum -y groupinstall "Desktop" "Desktop Platform" "X Window System" "Fonts"

You can also install the following optional GUI packages.

# yum -y groupinstall "Graphical Administration Tools"

# yum -y groupinstall "Internet Browser"

# yum -y groupinstall "General Purpose Desktop"

# yum -y groupinstall "Office Suite and Productivity"

# yum -y groupinstall "Graphics Creation Tools"

Finally, if you want to add the K Desktop Environment (KDE):

# yum -y groupinstall kde-desktop

When using yum groupinstall, the groupinstall option only installs the default and mandatory packages from the group. There are times when you also want to include the optional packages within a group. I have not figured out (yet) how to control which package types are installed (the group package “policy”) from the command line using yum. The only method I know of that also includes optional packages is to edit the /etc/yum.conf file and add the following to the [main] section:

group_package_types=default mandatory optional

The reason I mention this is because I wanted to install “Terminal emulator for the X Window System” (xterm) which is under the group “Legacy X Window System compatibility”. xterm happens to be an optional package and did not get installed until I added group_package_types=default mandatory optional to /etc/yum.conf.
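For clarity, the relevant portion of /etc/yum.conf would then look like this (with your existing [main] settings left untouched):

```
[main]
# ...existing settings...
group_package_types=default mandatory optional
```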

# yum -y groupinstall "Legacy X Window System compatibility"

I did find a plug-in for yum that allows users to specify which package types within a package group should be installed when using yum groupinstall.

http://projects.robinbowes.com/yum-grouppackagetypes/trac

Enable GNOME

Since the server was previously running on CLI mode, we need to change the initialization process for the machine to boot up in GUI mode.

Open /etc/inittab using a text editor and change following line:

id:3:initdefault:

To:

id:5:initdefault:

After making the change, reboot the machine.

# init 6

Note that you can switch between GUI and CLI mode manually with the following key combinations:

GUI to CLI: Ctrl + Alt + F6
CLI to GUI: Ctrl + Alt + F1

Installing Additional Applications

After logging in to the GNOME Desktop, you can now go to System > Administration > Add/Remove Software to manage applications in CentOS.

By using this wizard, you can install various applications similar to yum but through a GUI. Applications installed using this method will appear in the Application menu list.

Resetting Root Password Using Rescue Mode
https://www.shineservers.com/2016/03/28/resetting-root-password-using-rescue-mode/ (Mon, 28 Mar 2016)

It’s been a million-dollar question for anyone who is stuck and doesn’t remember the root password. If you are not able to reset the password for your Linux server, you will need to place the server into rescue mode, chroot into the server’s file system, and run passwd to update the root password. Sounds easy? Let me show you how 🙂

  1. Place the server into rescue mode; if you have no idea how to do that, ask your hosting provider to do it for you.
  2. Connect to the rescue-mode server using SSH as you normally do.
  3. It is always suggested to run ‘fsck’ (file system check) whenever you get the chance. It will save you the hassle of it running automatically during a reboot, which makes boot take longer than expected.

This could be either /dev/sda1 or /dev/sdb1 depending on your setup.

I will be using /dev/sda1 in the rest of the example:

fsck -fyv /dev/sda1

This will force a file system check (the f flag), automatically respond ‘yes’ to any questions prompted (the y flag), and display a verbose output at the very end (the v flag).

Mounting the file system:

a. Make a temporary directory:

mkdir /mnt/rescue

b. Mount to that temporary directory:

mount /dev/sda1 /mnt/rescue

4. Next we use ‘chroot’, which allows you to set the root of the system in a temporary environment:

chroot /mnt/rescue

5. Now that we are chroot-ed into your original drive, all you have to do is run ‘passwd’ to update your root password on the original Server’s hard drive.

passwd

(This will prompt you for your new password twice, and then update the appropriate files.)

6. Exit out of chroot mode.

exit

7. Unmount your original drive

umount /mnt/rescue

8. Exit out of SSH and Exit Rescue Mode.
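Put together, and assuming the root file system really is on /dev/sda1, the whole rescue-mode session looks like this:

```
fsck -fyv /dev/sda1            # check the file system first
mkdir /mnt/rescue              # temporary mount point
mount /dev/sda1 /mnt/rescue    # mount the original root
chroot /mnt/rescue             # make it the working root
passwd                         # set the new root password (prompts twice)
exit                           # leave the chroot
umount /mnt/rescue             # unmount the original drive
```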

How To Increase Page Load Speed with Apache KeepAlive
https://www.shineservers.com/2014/04/13/increase-page-load-speed-apache-keepalive/ (Sun, 13 Apr 2014)

The KeepAlive directive for Apache allows a single connection to serve multiple requests. On a typical page load, the client may need to download HTML, CSS, JS, and images. When KeepAlive is set to “On”, all of these files can be downloaded over a single connection. If KeepAlive is set to “Off”, each file download requires its own connection.

You can control how many requests can be served over a single connection with the MaxKeepAliveRequests directive, which defaults to 100. If your pages pull in a lot of different files, consider setting this higher so that each page can load over a single connection.

One thing to be cautious of when using KeepAlive is that connections remain open waiting for new requests once they are established. This can use up a lot of memory, as the idle processes still consume RAM. You can limit this with the KeepAliveTimeout directive, which specifies how long an idle connection stays open. I generally set it below 5 seconds, depending on the average load times of my site.

An important factor when deciding whether to use KeepAlive is the CPU vs. RAM trade-off on your server. Having KeepAlive On consumes less CPU because files are served over a single connection, but uses more RAM because processes sit idle between requests. Here is an example of the KeepAlive settings I use:

KeepAlive             On
MaxKeepAliveRequests  50
KeepAliveTimeOut      3

Once KeepAlive is on, you will see the following header in your server’s response:

Connection:  Keep-Alive
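One way to check this from the command line (assuming curl is available and substituting your own domain for example.com) is to request only the headers over HTTP/1.1, since HTTP/2 connections do not carry this header:

```
curl -sI --http1.1 http://example.com/ | grep -i 'connection\|keep-alive'
```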


How To Increase Page Load Speed with Apache mod_deflate
https://www.shineservers.com/2014/04/13/increase-page-load-speed-apache-mod_deflate/ (Sun, 13 Apr 2014)

Apache’s mod_deflate is an Apache module that compresses output from your server before it is sent to the client. If you have a newer version of Apache, the mod_deflate module is probably loaded by default, but it may not be turned on. To check whether compression is enabled on your site, first verify that the module is loaded in your httpd.conf file:

LoadModule deflate_module modules/mod_deflate.so

Then you can use the following web-based tool to verify compression:

http://www.whatsmyip.org/http-compression-test/

For my server (CentOS 6.x), the module was loaded by default but compression was not on until I set up the configuration file. You can place your compression configuration in your httpd.conf file, an .htaccess file, or a .conf file in your httpd/conf.d directory. My base configuration file is as follows:

<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html
    AddOutputFilterByType DEFLATE text/plain 
    AddOutputFilterByType DEFLATE text/css 
    AddOutputFilterByType DEFLATE text/javascript
    AddOutputFilterByType DEFLATE text/xml
</IfModule>

The configuration file specifies that html, plain-text, css, and javascript files should be compressed before being sent back to the client. When writing your configuration file, you don’t want to compress images: they are already compressed using their own format-specific algorithms, and compressing them a second time just wastes CPU. Depending on the server you are running, you may want a more comprehensive compression scheme based on different file types and browsers. More information can be found in the Apache docs referenced below.

Another thing to consider is that while the gzip compression algorithm is fast and efficient for smaller text files, it can be cumbersome on your CPU when trying to compress larger files. Be wary when adding compression to non text files > 50 KB.
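To get a feel for why text compresses so well, here is a small self-contained demonstration using the gzip command-line tool (any repetitive text sample will do):

```shell
# Build a 12,000-byte repetitive text sample
printf 'hello world %.0s' $(seq 1 1000) > /tmp/sample.txt
wc -c < /tmp/sample.txt

# Compress it on the fly and measure the result; the gzip output
# is a tiny fraction of the original size
gzip -c /tmp/sample.txt | wc -c
```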

When you examine the HTTP headers of your server’s response, you will see the following headers for compressed content:

Content-Encoding: gzip
Vary: Accept-Encoding
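You can also verify compression from the command line (again substituting your own domain for example.com); sending an Accept-Encoding header and inspecting the response should show the two headers above:

```
curl -sI -H 'Accept-Encoding: gzip' http://example.com/ | grep -i 'content-encoding\|vary'
```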

Here is another default configuration file taken from Ubuntu 12.10:

<IfModule mod_deflate.c>
    # these are known to be safe with MSIE 6
    AddOutputFilterByType DEFLATE text/html text/plain text/xml
    # everything else may cause problems with MSIE 6
    AddOutputFilterByType DEFLATE text/css
    AddOutputFilterByType DEFLATE application/x-javascript application/javascript
    AddOutputFilterByType DEFLATE application/ecmascript
    AddOutputFilterByType DEFLATE application/rss+xml
</IfModule>

Reference
http://httpd.apache.org/docs/2.2/mod/mod_deflate.html


How To Set Up mod_security with Apache on Debian/Ubuntu
https://www.shineservers.com/2014/04/13/set-mod_security-apache-debianubuntu/ (Sun, 13 Apr 2014)

Installing mod_security

Modsecurity is available in the Debian/Ubuntu repository:

apt-get install libapache2-modsecurity

Verify if the mod_security module was loaded.

apachectl -M | grep --color security

You should see a module named security2_module (shared) which indicates that the module was loaded.

Modsecurity’s installation includes a recommended configuration file which has to be renamed:

mv /etc/modsecurity/modsecurity.conf{-recommended,}

Reload Apache

service apache2 reload

You’ll find a new log file for mod_security in the Apache log directory:

root@droplet:~# ls -l /var/log/apache2/modsec_audit.log
-rw-r----- 1 root root 0 Oct 19 08:08 /var/log/apache2/modsec_audit.log

Configuring mod_security


Out of the box, ModSecurity doesn’t do anything, as it needs rules to work. The default configuration file is set to DetectionOnly, which logs requests according to rule matches and doesn’t block anything. This can be changed by editing the modsecurity.conf file:

nano /etc/modsecurity/modsecurity.conf

Find this line

SecRuleEngine DetectionOnly

and change it to:

SecRuleEngine On

If you’re trying this out on a production server, change this directive only after testing all your rules.

Another directive to modify is SecResponseBodyAccess. This configures whether response bodies are buffered (i.e. read by ModSecurity). This is only necessary if data-leakage detection and protection is required; otherwise, leaving it On will use up server resources and also increase the log file size.

Find this

SecResponseBodyAccess On

and change it to:

SecResponseBodyAccess Off

Now we’ll limit the maximum amount of data that can be POSTed to your web application. Two directives configure this:

SecRequestBodyLimit
SecRequestBodyNoFilesLimit

The SecRequestBodyLimit directive specifies the maximum POST data size. If anything larger is sent by a client the server will respond with a 413 Request Entity Too Large error. If your web application doesn’t have any file uploads this value can be greatly reduced.

The value mentioned in the configuration file is

SecRequestBodyLimit 13107200

which is 12.5MB.

Similar to this is the SecRequestBodyNoFilesLimit directive. The only difference is that it limits the size of POST data minus file uploads; this value should be “as low as practical.”

The value in the configuration file is

SecRequestBodyNoFilesLimit 131072

which is 128KB.
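The two byte values quoted above are easy to sanity-check; converting them with awk (1 KB = 1024 bytes, 1 MB = 1048576 bytes):

```shell
# Convert the ModSecurity byte limits discussed above into human units.
awk 'BEGIN {
    printf "SecRequestBodyLimit:        %g MB\n", 13107200 / 1048576
    printf "SecRequestBodyNoFilesLimit: %g KB\n", 131072  / 1024
}'
# prints 12.5 MB and 128 KB
```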

Along the same lines is another directive that affects server performance: SecRequestBodyInMemoryLimit. This directive is largely self-explanatory; it specifies how much of the request body (POSTed data) is kept in memory (RAM), with anything more being written to the hard disk (much like swapping). On servers with SSDs this is not much of an issue, but it can be set to a higher value if you have RAM to spare.

SecRequestBodyInMemoryLimit 131072

This is the value (128KB) specified in the configuration file.
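Pulling the body-handling directives from this section together gives a small block of settings. A sketch, written to a scratch file here; on a real system you would edit these values in /etc/modsecurity/modsecurity.conf instead:

```shell
# Sketch: the four body-handling directives discussed in this section.
cat > /tmp/modsec-body.conf <<'EOF'
SecResponseBodyAccess Off
SecRequestBodyLimit 13107200
SecRequestBodyNoFilesLimit 131072
SecRequestBodyInMemoryLimit 131072
EOF
grep -c '^Sec' /tmp/modsec-body.conf    # 4
```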

Testing SQL Injection


Before going ahead with configuring rules, we will create a PHP script which is vulnerable to SQL injection and try it out. Please note that this is just a basic PHP login script with no session handling; it is deliberately insecure and meant only for testing. Be sure to change the MySQL password in the script below so that it can connect to the database:

/var/www/login.php

<html>
<body>
<?php
    // NOTE: deliberately vulnerable demo script -- do not deploy in production.
    if(isset($_POST['login']))
    {
        $username = $_POST['username'];
        $password = $_POST['password'];
        $con = mysqli_connect('localhost','root','password','sample');
        $result = mysqli_query($con, "SELECT * FROM `users` WHERE username='$username' AND password='$password'");
        if(mysqli_num_rows($result) == 0)
            echo 'Invalid username or password';
        else
            echo '<h1>Logged in</h1><p>A Secret for you....</p>';
    }
    else
    {
?>
        <form action="" method="post">
            Username: <input type="text" name="username"/><br />
            Password: <input type="password" name="password"/><br />
            <input type="submit" name="login" value="Login"/>
        </form>
<?php
    }
?>
</body>
</html>

This script will display a login form. Entering the right credentials will display a message “A Secret for you.”

We need credentials in the database. Create a MySQL database and a table, then insert usernames and passwords.

mysql -u root -p

This will take you to the mysql> prompt

create database sample;
connect sample;
create table users(username VARCHAR(100),password VARCHAR(100));
insert into users values('jesin','pwd');
insert into users values('alice','secret');
quit;

Open your browser, navigate to http://yourwebsite.com/login.php and enter the right pair of credentials.

Username: jesin
Password: pwd

You’ll see a message that indicates successful login. Now come back and enter a wrong pair of credentials; you’ll see the message Invalid username or password.

We can confirm that the script works correctly. The next job is to try our hand at SQL injection to bypass the login page. Enter the following in the username field:

' or true -- 

Note that there must be a space after --; the injection won’t work without it. Leave the password field empty and hit the login button.

Voila! The script shows the message meant for authenticated users.
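To see why the payload works, substitute it into the script’s query string exactly as PHP’s string interpolation does; a shell sketch (the table and column names come from the login script above):

```shell
# Reproduce the vulnerable query-building step from login.php.
username="' or true -- "
password=""
query="SELECT * FROM \`users\` WHERE username='$username' AND password='$password'"
echo "$query"
# The -- comments out the rest of the line, leaving: WHERE username='' or true
```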

Setting Up Rules


To make your life easier, there are a lot of rules which are already installed along with mod_security. These are called CRS (Core Rule Set) and are located in

root@droplet:~# ls -l /usr/share/modsecurity-crs/
total 40
drwxr-xr-x 2 root root  4096 Oct 20 09:45 activated_rules
drwxr-xr-x 2 root root  4096 Oct 20 09:45 base_rules
drwxr-xr-x 2 root root  4096 Oct 20 09:45 experimental_rules
drwxr-xr-x 2 root root  4096 Oct 20 09:45 lua
-rw-r--r-- 1 root root 13544 Jul  2  2012 modsecurity_crs_10_setup.conf
drwxr-xr-x 2 root root  4096 Oct 20 09:45 optional_rules
drwxr-xr-x 3 root root  4096 Oct 20 09:45 util

The documentation is available at

root@droplet1:~# ls -l /usr/share/doc/modsecurity-crs/
total 40
-rw-r--r-- 1 root root   469 Jul  2  2012 changelog.Debian.gz
-rw-r--r-- 1 root root 12387 Jun 18  2012 changelog.gz
-rw-r--r-- 1 root root  1297 Jul  2  2012 copyright
drwxr-xr-x 3 root root  4096 Oct 20 09:45 examples
-rw-r--r-- 1 root root  1138 Mar 16  2012 README.Debian
-rw-r--r-- 1 root root  6495 Mar 16  2012 README.gz

To load these rules, we need to tell Apache to look into these directories. Edit the mod-security.conf file.

nano /etc/apache2/mods-enabled/mod-security.conf

Add the following directives inside <IfModule security2_module> </IfModule>:

Include "/usr/share/modsecurity-crs/*.conf"
Include "/usr/share/modsecurity-crs/activated_rules/*.conf"

The activated_rules directory is similar to Apache’s mods-enabled directory. The rules are available in directories:

/usr/share/modsecurity-crs/base_rules
/usr/share/modsecurity-crs/optional_rules
/usr/share/modsecurity-crs/experimental_rules

Symlinks must be created inside the activated_rules directory to activate these. Let us activate the SQL injection rules.

cd /usr/share/modsecurity-crs/activated_rules/
ln -s /usr/share/modsecurity-crs/base_rules/modsecurity_crs_41_sql_injection_attacks.conf .
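To enable the entire base rule set instead of a single file, loop over the directory. Sketched here against temporary paths so nothing system-wide is touched; on a real server the paths would be /usr/share/modsecurity-crs/base_rules and /usr/share/modsecurity-crs/activated_rules:

```shell
# Sketch: symlink every base rule into activated_rules.
base=/tmp/crs-demo/base_rules
activated=/tmp/crs-demo/activated_rules
mkdir -p "$base" "$activated"
touch "$base"/modsecurity_crs_41_sql_injection_attacks.conf \
      "$base"/modsecurity_crs_41_xss_attacks.conf
for f in "$base"/*.conf; do
    ln -sf "$f" "$activated/"
done
ls "$activated" | wc -l    # one symlink per rule file: 2
```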

Apache has to be reloaded for the rules to take effect.

service apache2 reload

Now open the login page we created earlier and try the SQL injection query in the username field. If you changed the SecRuleEngine directive to On, you’ll see a 403 Forbidden error. If it was left at DetectionOnly, the injection will succeed but the attempt will be logged in the modsec_audit.log file.

Writing Your Own mod_security Rules


In this section, we’ll create a rule chain that blocks the request if certain “spammy” words are entered in an HTML form. First, we’ll create a PHP script which gets the input from a textbox and displays it back to the user.

/var/www/form.php

<html>
    <body>
        <?php
            if(isset($_POST['data']))
                echo $_POST['data'];
            else
            {
        ?>
                <form method="post" action="">
                        Enter something here:<textarea name="data"></textarea>
                        <input type="submit"/>
                </form>
        <?php
            }
        ?>
    </body>
</html>

Custom rules can be added to any of the configuration files or placed in modsecurity directories. We’ll place our rules in a separate new file.

nano /etc/modsecurity/modsecurity_custom_rules.conf

Add the following to this file:

SecRule REQUEST_FILENAME "form.php" "id:'400001',chain,deny,log,msg:'Spam detected'"
SecRule REQUEST_METHOD "POST" chain
SecRule REQUEST_BODY "@rx (?i:(pills|insurance|rolex))"

Save the file and reload Apache. Open http://yourwebsite.com/form.php in the browser and enter text containing any of these words: pills, insurance, rolex.

You’ll either see a 403 page and a log entry, or only a log entry, depending on the SecRuleEngine setting. The syntax for SecRule is

SecRule VARIABLES OPERATOR [ACTIONS]

Here we used the chain action to match the variables REQUEST_FILENAME against form.php, REQUEST_METHOD against POST, and REQUEST_BODY against the regular expression (@rx) (pills|insurance|rolex). The ?i: makes the match case insensitive. Only when all three rules match does the ACTION fire: deny and log with the msg “Spam detected.” The chain action thus simulates a logical AND across the three rules.
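The @rx pattern can be exercised on its own; grep -Ei performs the same case-insensitive alternation that (?i:...) does inside the rule:

```shell
# The spam word list from the rule chain, tested outside ModSecurity.
pattern='(pills|insurance|rolex)'
echo 'Buy cheap ROLEX watches' | grep -Eiq "$pattern" && echo 'would be denied'
echo 'A perfectly normal comment' | grep -Eiq "$pattern" || echo 'would be allowed'
```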

Excluding Hosts and Directories


Sometimes it makes sense to exclude a particular directory or a domain name, for example one running an application like phpMyAdmin, since ModSecurity would block its legitimate SQL queries. It is also wise to exclude the admin backends of CMS applications like WordPress.

To disable ModSecurity for an entire VirtualHost, place the following

<IfModule security2_module>
    SecRuleEngine Off
</IfModule>

inside the <VirtualHost> section.

For a particular directory:

<Directory "/var/www/wp-admin">
    <IfModule security2_module>
        SecRuleEngine Off
    </IfModule>
</Directory>

If you don’t want to completely disable modsecurity, use the SecRuleRemoveById directive to remove a particular rule or rule chain by specifying its ID.

<LocationMatch "/wp-admin/update.php">
    <IfModule security2_module>
        SecRuleRemoveById 981173
    </IfModule>
</LocationMatch>

Further Reading


Official ModSecurity documentation: https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual

The post How To Set Up mod_security with Apache on Debian/Ubuntu first appeared on Shine Servers: Illuminating IT Solutions Since 2012.

]]>
https://www.shineservers.com/2014/04/13/set-mod_security-apache-debianubuntu/feed/ 0 2841
How To Use MySQL Query Profiling https://www.shineservers.com/2014/04/13/use-mysql-query-profiling/ https://www.shineservers.com/2014/04/13/use-mysql-query-profiling/#respond Sun, 13 Apr 2014 15:34:07 +0000 http://blog.shineservers.com/?p=2839 What is the MySQL slow query log? The MySQL slow query log is a log that MySQL sends slow, potentially problematic queries to. This logging functionality comes with MySQL but is turned off by default. What queries are logged is determined by customizable server variables that allow for query profiling based on an application’s performance […]

The post How To Use MySQL Query Profiling first appeared on Shine Servers: Illuminating IT Solutions Since 2012.

]]>
What is the MySQL slow query log?

The MySQL slow query log is a log to which MySQL sends slow, potentially problematic queries. This logging functionality comes with MySQL but is turned off by default. Which queries are logged is determined by customizable server variables that allow for query profiling based on an application’s performance requirements. Generally, the queries logged are those that take longer than a specified amount of time to execute or that do not properly hit indexes.

Setting up profiling variables

The primary server variables for setting up the MySQL slow query log are:

slow_query_log			G 
slow_query_log_file			G 
long_query_time			G / S
log_queries_not_using_indexes	G
min_examined_row_limit		G / S

NOTE: (G) global variable, (S) session variable

slow_query_log – Boolean for turning the slow query log on and off.

slow_query_log_file – The absolute path for the query log file. The file’s directory should be owned by the mysqld user and have the correct permissions to be read from and written to. The MySQL daemon will likely be running as `mysql`, but to verify, run the following in the Linux terminal:

 ps -ef | grep bin/mysqld | cut -d' ' -f1

The output will likely display the current user as well as the mysqld user. An example of setting the directory path /var/log/mysql:

cd /var/log
mkdir mysql
chmod 755 mysql
chown mysql:mysql mysql

long_query_time – The time, in seconds, for checking query length. For a value of 5, any query taking longer than 5s to execute would be logged.

log_queries_not_using_indexes – Boolean value whether to log queries that are not hitting indexes. When doing query analysis, it is important to log queries that are not hitting indexes.

min_examined_row_limit – Sets a lower limit on how many rows should be examined. A value of 1000 would ignore any query that analyzes less than 1000 rows.

The MySQL server variables can be set in the MySQL conf file or dynamically via a MySQL GUI or MySQL command line. If the variables are set in the conf file, they will be persisted when the server restarts but will also require a server restart to become active. The MySQL conf file is usually located in `/etc or /usr`, typically `/etc/my.cnf` or `/etc/mysql/my.cnf`. To find the conf file (may have to broaden search to more root directories):

find /etc -name my.cnf
find /usr -name my.cnf

Once the conf file has been found, simply append the desired values under the [mysqld] heading:

[mysqld]
….
slow-query-log = 1
slow-query-log-file = /var/log/mysql/localhost-slow.log
long_query_time = 1
log-queries-not-using-indexes

Again, the changes will not take effect until after a server restart, so if the changes are needed immediately, set the variables dynamically:

mysql> SET GLOBAL slow_query_log = 'ON';
mysql> SET GLOBAL slow_query_log_file = '/var/log/mysql/localhost-slow.log';
mysql> SET GLOBAL log_queries_not_using_indexes = 'ON';
mysql> SET SESSION long_query_time = 1;
mysql> SET SESSION min_examined_row_limit = 100;

To check the variable values:

mysql> SHOW GLOBAL VARIABLES LIKE 'slow_query_log';
mysql> SHOW SESSION VARIABLES LIKE 'long_query_time';

One drawback to setting MySQL variables dynamically is that the variables will be lost upon server restart. It is advisable to add any important variables that you need to be persisted to the MySQL conf file.

NOTE: The syntax for setting variables dynamically via SET differs slightly from the syntax used in the conf file, e.g. `slow_query_log` vs. `slow-query-log`. View MySQL’s dynamic system variables page for the different syntaxes: the Option-File Format is the format for the conf file, and the System Variable Name is the name used when setting the variables dynamically.
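The mapping between the two spellings is mechanical; tr shows it:

```shell
# Option-file names use dashes; SET statement names use underscores.
echo 'slow_query_log' | tr '_' '-'    # option-file form: slow-query-log
echo 'slow-query-log' | tr '-' '_'    # dynamic form:     slow_query_log
```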

Generating query profile data

Now that the MySQL slow query log configurations have been outlined, it is time to generate some query data for profiling. This example was written on a running MySQL instance with no prior slow log configurations set. The example’s queries can be run via a MySQL GUI or through the MySQL command prompt. When monitoring the slow query log, it is useful to have two connection windows open to the server: one connection for writing the MySQL statements and one connection for watching the query log.

In the MySQL console tab, log into MySQL server with a user who has SUPER ADMIN privileges. To start, create a test database and table, add some dummy data, and turn on the slow query log. This example should be run in a development environment, ideally with no other applications using MySQL to help avoid cluttering the query log as it is being monitored:

$> mysql -u <user_name> -p

mysql> CREATE DATABASE profile_sampling;
mysql> USE profile_sampling;
mysql> CREATE TABLE users ( id TINYINT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(255) );
mysql> INSERT INTO users (name) VALUES ('Walter'),('Skyler'),('Jesse'),('Hank'),('Walter Jr.'),('Marie'),('Saul'),('Gustavo'),('Hector'),('Mike');
mysql> SET GLOBAL slow_query_log = 1;
mysql> SET GLOBAL slow_query_log_file = '/var/log/mysql/localhost-slow.log';
mysql> SET GLOBAL log_queries_not_using_indexes = 1;
mysql> SET long_query_time = 10;
mysql> SET min_examined_row_limit = 0;

There is now a test database and table with a small amount of test data. The slow query log was turned on, but long_query_time was intentionally set high and min_examined_row_limit kept at 0. In the console tab for viewing the log:

cd /var/log/mysql
ls -l

There should be no slow query log in the folder yet, as no queries have been run. If there is, that means that the slow query log has been turned on and configured in the past, which may skew some of this example’s results. Back in the MySQL tab, run the following SQL:

mysql> USE profile_sampling;
mysql> SELECT * FROM users WHERE id = 1;

The query executed was a simple select using the Primary Key index from the table. This query was fast and used an index, so there will be no entries in the slow query log for this query. Look back in the query log directory and verify that no log was created. Now back in your MySQL window run:

mysql> SELECT * FROM users WHERE name = 'Jesse';

This query was run on a non-indexed column (name). At this point there will be an entry in the log with the following info (it may not be exactly the same):

/var/log/mysql/localhost-slow.log

# Time: 140322 13:54:58
# User@Host: root[root] @ localhost []
# Query_time: 0.000303  Lock_time: 0.000090 Rows_sent: 1  Rows_examined: 10
use profile_sampling;
SET timestamp=1395521698;
SELECT * FROM users WHERE name = 'Jesse';

The query has been successfully logged. One more example. Raise the minimum examined row limit and run a similar query:

mysql> SET min_examined_row_limit = 100;
mysql> SELECT * FROM users WHERE name = 'Walter';

No data will be added to the log because the minimum of 100 rows was not analyzed.

NOTE: If no data is being populated into the log, there are a couple of things to check. First, check the permissions of the directory in which the log is created; the owner/group should be the mysqld user (see above for an example) with correct permissions (chmod 755 to be sure). Second, there may be existing slow query variable configurations interfering with the example. Reset the defaults by removing any slow query variables from the conf file and restarting the server, or set the global variables dynamically back to their default values. If the changes are made dynamically, log out and log back into MySQL to ensure the global updates take effect.

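The directory fix from the note above can be scripted; a sketch against a temporary directory, since chown to the mysql user needs root (uncomment it on a real server):

```shell
# Recreate the slow-log directory with the permissions mysqld needs.
dir=/tmp/mysql-slowlog-demo
mkdir -p "$dir"
chmod 755 "$dir"
# chown mysql:mysql "$dir"    # requires root; enable on a real system
ls -ld "$dir"
```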
Analyzing query profile information

Looking at the query profile data from the above example:

# Time: 140322 13:54:58
# User@Host: root[root] @ localhost []
# Query_time: 0.000303  Lock_time: 0.000090 Rows_sent: 1  Rows_examined: 10
use profile_sampling;
SET timestamp=1395521698;
SELECT * FROM users WHERE name = 'Jesse';

The entry displays:

  • Time at which the query was run
  • Who ran it
  • How long the query took
  • Length of the lock
  • How many rows were returned
  • How many rows were examined

This is useful because any query that violates the performance requirements specified with the server variables ends up in the log. This allows a developer or admin to have MySQL alert them when a query is not performing as well as it should (as opposed to reading through source code and trying to find poorly written queries). Also, the query profiling data can be useful when collected over a period of time, which can help determine what circumstances are contributing to poor application performance.
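Those fields are easy to pull out of an entry programmatically; an awk sketch over the sample entry above:

```shell
# Pull the timing fields out of a slow-log entry header.
cat > /tmp/slow-entry.log <<'EOF'
# Time: 140322 13:54:58
# User@Host: root[root] @ localhost []
# Query_time: 0.000303  Lock_time: 0.000090 Rows_sent: 1  Rows_examined: 10
SET timestamp=1395521698;
SELECT * FROM users WHERE name = 'Jesse';
EOF
awk '/^# Query_time/ { print "query_time=" $3, "rows_examined=" $NF }' /tmp/slow-entry.log
# query_time=0.000303 rows_examined=10
```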

Using mysqldumpslow

In a more realistic example, profiling would be enabled on a database-driven application, providing a moderate stream of data to profile against. The log would be continually written to, likely more frequently than anybody would be watching it. As the log grows, it becomes difficult to parse through all the data, and problematic queries easily get lost. MySQL offers another tool, mysqldumpslow, that helps avoid this problem by summarizing the slow query log. The binary is bundled with MySQL (on Linux), so to use it simply run the command and pass in the log path:

mysqldumpslow -t 5 -s at /var/log/mysql/localhost-slow.log

There are various parameters that can be used with the command to customize the output. In the example above, the top 5 queries sorted by average query time will be displayed. The resulting rows are more readable and are grouped by query (this output is different from the example in order to demonstrate high values):

Count: 2  Time=68.34s (136s)  Lock=0.00s (0s)  Rows=39892974.5 (79785949), root[root]@localhost
  SELECT PL.pl_title, P.page_title
  FROM page P
  INNER JOIN pagelinks PL
  ON PL.pl_namespace = P.page_namespace
  WHERE P.page_namespace = N
…

The data being displayed:

  • Count – How many times the query has been logged
  • Time – Both the average time and the total time in the ()
  • Lock – Table lock time
  • Rows – Number of rows returned

The command abstracts numbers and strings, so the same query with different WHERE clause values is counted as one query (notice the page_namespace = N). Having a tool like mysqldumpslow removes the need to constantly watch the slow query log, instead allowing for periodic or automated checks. The parameters to the mysqldumpslow command allow for some complex expression matching, which helps drill down into the various queries in the log.
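The grouping works because mysqldumpslow replaces literal numbers and strings with placeholders before comparing queries; the same idea can be sketched with sed (N for numbers and 'S' for strings, mirroring its output):

```shell
# Normalize literals so structurally identical queries group together.
normalize() { sed -e "s/'[^']*'/'S'/g" -e 's/[0-9][0-9]*/N/g'; }
echo "SELECT * FROM users WHERE name = 'Jesse'"  | normalize
echo "SELECT * FROM users WHERE name = 'Walter'" | normalize
# both lines print: SELECT * FROM users WHERE name = 'S'
```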

There are also third-party log analysis tools available that offer different data views; a popular one is pt-query-digest.

Query breakdown

One last profiling tool to be aware of is MySQL’s built-in query profiler, which allows for a detailed breakdown of a single query. A good use case for it is grabbing a problematic query from the slow query log and running it directly in MySQL. First profiling must be turned on, then the query is run:

mysql> SET SESSION profiling = 1;
mysql> USE profile_sampling;
mysql> SELECT * FROM users WHERE name = 'Jesse';
mysql> SHOW PROFILES;

After profiling has been turned on, SHOW PROFILES displays a table linking a Query_ID to each SQL statement. Find the Query_ID corresponding to the query you ran and run the following query (replace # with your Query_ID):

mysql> SELECT * FROM INFORMATION_SCHEMA.PROFILING WHERE QUERY_ID=#;

Sample Output:

SEQ STATE DURATION
1 starting 0.000046
2 checking permissions 0.000005
3 opening tables 0.000036

The STATE is the “step” in the process of executing the query, and the DURATION is how long that step took to complete, in seconds. This isn’t an overly useful tool for everyday work, but it is interesting and can help determine which part of query execution is causing the most latency.
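The per-step durations sum to the query’s total execution time; awk over the sample output above:

```shell
# Sum the DURATION column from the sample SHOW PROFILE output.
printf '%s\n' \
  '1 starting 0.000046' \
  '2 checking permissions 0.000005' \
  '3 opening tables 0.000036' |
awk '{ total += $NF } END { printf "total=%.6f\n", total }'
# total=0.000087
```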

For a detailed outline of the various columns: http://dev.mysql.com/doc/refman/5.5/en/profiling-table.html

For a detailed overview of the various “steps”: http://dev.mysql.com/doc/refman/5.5/en/general-thread-states.html

NOTE: This tool should NOT be used in a production environment; rather, it is for analyzing specific queries.

Slow query log performance

One last question to address is how the slow query log affects performance. In general it is safe to run the slow query log in a production environment; neither the CPU nor the I/O load should be a concern. However, there should be some strategy for monitoring the log size to ensure the log file does not get too big for the file system. Also, a good rule of thumb when running the slow query log in a production environment is to leave long_query_time at 1s or higher.
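A minimal size check, suitable for cron, might look like this; the path and threshold are illustrative, and a throwaway file stands in for the real log here:

```shell
# Warn when the slow query log grows past a byte threshold.
log=/tmp/demo-slow.log
limit=104857600                     # 100 MB, illustrative
head -c 1000 /dev/zero > "$log"     # stand-in for the real log file
size=$(wc -c < "$log")
if [ "$size" -gt "$limit" ]; then
    echo "slow log is $size bytes; time to rotate"
else
    echo "slow log OK ($size bytes)"
fi
```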

IMPORTANT: It is not a good idea to use the profiling tool (SET profiling=1) or to log all queries (i.e. the general_log variable) in a production, high-workload environment.

Conclusion

The slow query log is extremely helpful in singling out problematic queries and profiling overall query performance. When query profiling with the slow query log, a developer can get an in-depth understanding of how an application’s MySQL queries are performing. Using a tool such as mysqldumpslow, monitoring and evaluating the slow query log becomes manageable and can easily be incorporated into the development process. Now that problematic queries have been identified, the next step is to tune the queries for maximum performance.

The post How To Use MySQL Query Profiling first appeared on Shine Servers: Illuminating IT Solutions Since 2012.

]]>
https://www.shineservers.com/2014/04/13/use-mysql-query-profiling/feed/ 0 2839