Monday, July 11, 2016

Useful Linux Commands


Tarring:
 tar -pczf tar_file_name.tar.gz folder_name
Untarring:
 tar -zxvf tar_file_name.tar.gz
Zipping:
 zip -r zip_file_name.zip folder_name
Unzipping:
 unzip zip_file_name.zip -d folder_name
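Before untarring an archive from elsewhere, it can help to list its contents first with tar -t (no extraction). A minimal runnable sketch, using throwaway names that are not from the commands above:

```shell
# Create a scratch folder, archive it, then list the archive's
# contents without extracting anything (-t = list, -z = gzip).
mkdir -p demo_folder
echo "hello" > demo_folder/file.txt
tar -czf demo.tar.gz demo_folder
tar -tzf demo.tar.gz
```

The equivalent for zip files is unzip -l zip_file_name.zip.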

Create a symlink:
 ln -s target_file symlink_name
Remove a symlink:
 unlink link_name
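To see where an existing symlink points, readlink resolves it. A small self-contained sketch (the file names are illustrative):

```shell
# Make a file and a symlink to it, then resolve the link target.
echo "data" > original.txt
ln -sf original.txt link.txt
readlink link.txt      # prints the link target: original.txt
readlink -f link.txt   # prints the fully resolved absolute path
```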

SCP

 scp rondovu_1_0_24_sep_2011.tar.gz root@dev.myserver.com:/var/www/html/
Then untar it on the remote server:
 tar -zxvf rondovu_1_0_24_sep_2011.tar.gz

Rsync
 rsync --partial --progress --rsh=ssh /home/user/tar_file_name.tar.gz root@dev.myserver.com:/var/www/tar_file_name.tar.gz
List the size of files/folders in a directory
 du --max-depth=1 -h ./ | sort -hr
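A related variant: to hunt down the largest individual files as well as folders, du -a descends into files and sort -h orders the human-readable sizes correctly:

```shell
# Show the 10 largest files/folders under the current directory;
# sort -h understands the K/M/G suffixes that du -h emits.
du -ah . | sort -hr | head -n 10
```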

Count the lines of code in a project
 find . -name '*.php' | xargs wc -l
 find . -name '*.js' | xargs wc -l
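One caveat: if any file name contains spaces, plain xargs word-splits it and wc fails. The -print0 / -0 pair passes names null-delimited instead. A runnable sketch (the demo folder and file are made up for illustration):

```shell
# A file name with a space would break plain xargs word-splitting;
# find -print0 with xargs -0 passes names null-delimited instead.
mkdir -p src_demo
printf 'line1\nline2\n' > 'src_demo/my file.php'
find src_demo -name '*.php' -print0 | xargs -0 wc -l
```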

Search for a text in files with regex
 egrep "User|Group|SuexecUserGroup" /etc/apache2/apache2.conf /etc/apache2/sites-available/*.conf

Know system info (whether 32-bit or 64-bit)
 file /usr/bin/file
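Two more direct alternatives, if you'd rather not read the output of file:

```shell
# uname -m prints the machine architecture (e.g. x86_64 on 64-bit),
# and getconf LONG_BIT prints just 64 or 32.
uname -m
getconf LONG_BIT
```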

Mysql dump and restore
mysqldump -u root -p db_name > db_name.sql
mysqldump -u root -p --databases db_name1 db_name2 db_name3 > db_name.sql
mysqldump -u root -p --all-databases > db_name.sql
mysql -u root -p db_name < db_name.sql
mysql --force -u root -p db_name < db_name.sql // To keep importing without stopping when there are errors

Analyse Apache Access Log to view traffic


To view requests per day
awk '{print $4}' access_log | cut -d: -f1 | uniq -c

To view requests per hour
grep "12/Jul/2016" access_log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":00"}' | sort -n | uniq -c


To view requests per minute
grep "12/Jul/2016:11" access_log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":"$3}' | sort -nk1 -nk2 | uniq -c | awk '{ if ($1 > 10) print $0}'

If your logs have been gzipped by logrotate, pipe them through zcat first:
zcat /site/mysite.com/logs/access.log.gz | awk '{print $4}' | cut -d: -f1 | uniq -c
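In the same spirit, the busiest client IPs can be counted from field 1 (assuming the common or combined log format). A tiny sample log is built inline so the pipeline runs anywhere; the IPs and paths are made up:

```shell
# Build a three-line sample access log, then count requests per
# client IP, busiest first.
printf '%s\n' \
  '10.0.0.1 - - [12/Jul/2016:11:00:01 +0000] "GET / HTTP/1.1" 200 512' \
  '10.0.0.2 - - [12/Jul/2016:11:00:02 +0000] "GET /a HTTP/1.1" 200 100' \
  '10.0.0.1 - - [12/Jul/2016:11:00:03 +0000] "GET /b HTTP/1.1" 200 200' \
  > sample_access_log
awk '{print $1}' sample_access_log | sort | uniq -c | sort -rn | head
```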

Saturday, September 21, 2013

MySQL - Create a new user



Connect to MySQL server - 
mysql -u root -p -h hostname

Execute the below commands from the mysql client
CREATE USER 'user_name'@'%' IDENTIFIED BY 'pwd';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX  ON db_name.* TO 'user_name'@'%';

We can add more privileges, like DROP, if the user/application needs to drop any tables.




Saturday, June 30, 2012

MemSQL - HipHop for SQL

MemSQL is a new database system, developed by ex-Facebook engineers, that speeds up relational workloads. As its name implies, it achieves fast performance by keeping data in memory, removing the disk bottleneck that most applications hit today. It also speaks the MySQL client protocol, so there is nothing new to learn and you can migrate from MySQL to MemSQL without changing your application code.

MemSQL takes a lesson from HipHop (Facebook's tool for converting PHP code into faster C++) and converts SQL to C++. Unlike traditional relational database systems, which interpret SQL queries, MemSQL transforms SQL into C++ and compiles the generated code into machine code. This is a one-time operation the first time a query is seen; all future queries that follow the same skeleton (with all the numeric and string parameters stripped off) bypass code generation and the compiler entirely.

For example, when you execute a query
  SELECT * FROM users WHERE id = 10 and name = "prasad",
it is first run through a linear-scan parser which strips off all the numeric and string parameters and creates a query skeleton. The resulting query skeleton is just a string and looks like
 SELECT * FROM users WHERE id =@ and name = ^.
This query skeleton is given to the code generator, which converts it into C++ code that executes the query. The resulting plan is loaded into the database as a shared object and registered in the plan cache.
All future queries that match the same skeleton are then executed directly, without going through the code-generation and compilation process.

Although MemSQL uses RAM as the primary storage for data, it backs data up to disk with snapshots and transaction logs so that nothing is lost on a system restart or failure. So your data is durable. These features can be tuned all the way from synchronous durability (every write transaction is recorded on disk before the query completes) to purely in-memory durability (maximum sustained throughput on writes).

So why wait? I installed MemSQL on my 64-bit Linux machine today and connected to it using phpMyAdmin within 10 minutes.


Click here to download the developer edition and install it yourself.


~ Cheers

Ubuntu Security updates

Executing the below commands installs all available updates (including security updates):
sudo apt-get update
sudo apt-get upgrade

There is a package, unattended-upgrades, which provides the functionality to install security updates automatically.
You could use it without configuring the automatic part and instead call it manually, so the following should do it:
sudo unattended-upgrade
(assuming the package is installed by default, which it probably is)
See also /usr/share/doc/unattended-upgrades/README.

AWS - Delete AutoScaling

1. Update your active auto scaling group with min-size 0 and max-size 0 using the below command. You should have the AWS Auto Scaling command line tools installed. You can follow my blog post to set up the command line tools.
$ as-update-auto-scaling-group prod-scaling-group --min-size 0 --max-size 0

2. Wait for the instances to shut down.

3. And now delete your auto scaling group
$ as-delete-auto-scaling-group pd-prod-scaling-group

Upgrade NGINX server on Ubuntu

sudo service nginx stop
sudo add-apt-repository ppa:nginx/stable
sudo apt-get update  
sudo apt-get install nginx
During the install you may be prompted about the modified default config file:

Configuration file `/etc/nginx/sites-available/default'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
 What would you like to do about it ? Your options are:
    Y or I : install the package maintainer's version
    N or O : keep your currently-installed version
      D    : show the differences between the versions
      Z    : background this process to examine the situation
 The default action is to keep your current version.
*** default (Y/I/N/O/D/Z) [default=N] ?

Choose option N to keep your currently-installed version.
 reboot

Amazon RDS - MySQL - Turn on Slow Log and General Log

The following commands create a new DB parameter group for your RDS server and set the
slow_query_log and general_log parameters.

You will need the RDS command line tools configured with your keys to execute the below commands.


$ rds-create-db-parameter-group slow-log-group -f mysql5.1 -d "This is created to enable slow query logging and a few other db parameters"

$ rds-modify-db-instance pd-rds-server --db-parameter-group-name slow-log-group --apply-immediately

$ rds-modify-db-parameter-group slow-log-group --parameters "name=slow_query_log, value=ON, method=immediate" --parameters "name=long_query_time, value=1, method=immediate" --parameters "name=min_examined_row_limit, value=100, method=immediate" --parameters "name=general_log, value=ON, method=immediate"

$ rds-describe-db-parameters slow-log-group

$ rds-reboot-db-instance pd-rds-server 

Setting up the git client for an AWS Elastic Beanstalk environment on a Windows machine

Prerequisites
1. Git should be installed
2. Powershell 2.0 should be installed
3. AWSDevTools-OnetimeSetup.bat must be run.


Setup
1. Create a folder
2. Run git init in the folder
3. Copy AWSDevTools-RepositorySetup.bat to the folder
4. Run AWSDevTools-RepositorySetup.bat
5. Delete AWSDevTools-RepositorySetup.bat
6. Open Git Bash and cd to the folder
7. Execute git aws.config
 - Enter AWS credentials
 - Enter AWS Beanstalk application name
 - Enter AWS Beanstalk environment name
8. git add .
9. git commit -m "Notes here"
10. git aws.push


Openfire chat server installation

Execute the below commands to set up Openfire on an Ubuntu system:

sudo add-apt-repository ppa:sun-java-community-team/sun-java6
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install sun-java6-jre
sudo wget http://www.igniterealtime.org/downloadServlet?filename=openfire/openfire_3.7.1_all.deb
sudo mv downloadServlet\?filename\=openfire%2Fopenfire_3.7.1_all.deb openfire_3.7.1_all.deb
sudo dpkg -i openfire_3.7.1_all.deb

On Redhat/CentOS, use the below command

rpm -ivh openfire-3.7.1-1.i386.rpm


- Open http://yourserver.com:9090 in your browser and complete the setup process (DB integration, domain, etc.).