Friday, 19 January 2018
Have you ever measured the amount of change you have in your database? Sometimes it can be a good idea to know. Maybe you need to set up a standby database and need details about the traffic between the two systems. Or maybe you plan to move the primary database and need to evaluate the different options. The volume will vary quite a bit from day to day, but the query shown below levels this out on a monthly basis.
Multiply the count by the average size of your archive logs and you will have an idea of the volume.
Now may also be a good time to evaluate the number of log switches you have in your database. Every minute is too frequent; every 10 minutes to every hour is a more optimal configuration. Consider resizing the log files (i.e. create a new set of log files and drop the old ones, as sketched below) if you are far off.
select to_char(first_time, 'yyyy-mm'), count(*)
from V$log_history
group by to_char(first_time, 'yyyy-mm')
order by 1;
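Multiplying by an average is only an estimate; the view v$archived_log also records the size of each log, so you can sum the actual bytes per month. A hedged variant of the query above (the rounding to GB is mine; if you archive to more than one destination, filter on dest_id):

select to_char(first_time, 'yyyy-mm'), round(sum(blocks*block_size)/1024/1024/1024) gb
from v$archived_log
where dest_id = 1
group by to_char(first_time, 'yyyy-mm')
order by 1;

And if your switch rate is far off, a minimal sketch of the resize mentioned above; group numbers, file paths and the 1g size are assumptions to adapt to your installation:

alter database add logfile group 4 ('/u02/oradata/ORCL/redo04.log') size 1g;
alter database add logfile group 5 ('/u02/oradata/ORCL/redo05.log') size 1g;
alter database add logfile group 6 ('/u02/oradata/ORCL/redo06.log') size 1g;
-- switch until the old groups show INACTIVE in v$log, then drop them
alter system switch logfile;
alter database drop logfile group 1;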
Sunday, 14 January 2018
mongoDB daemon log file
Do you have a very large mongoDB daemon log file?
You can install a log rotate package, or you can implement a dead simple bash script that will do more or less the same job. That is what this blog post is about.
Make sure the script is adjusted for your installation before running it. If you have a very large daemon log file, many GB, you might want to keep just a portion of it for the first run. That is the idea of the variable MAX (number of lines). Do some checks up front to find a good value for this parameter. The default here is 20000 lines. If you do a "tail -n 20000 mongod.log | more" you will get an idea of how many days back that reaches. If you want to preserve the entire file you really should copy it up front and store it on a filesystem that has the capacity to hold it.
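To see what a given MAX evaluates to in days, compare the first and last timestamps of the tail; a quick sketch, assuming the default mongod log format where the timestamp is the first field:

tail -n 20000 /var/log/mongodb/mongod.log | head -n 1 | awk '{print $1}'  # oldest line you would keep
tail -n 1 /var/log/mongodb/mongod.log | awk '{print $1}'                  # newest line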
And for God's sake, make sure the variable LOGFILE actually points to the daemon logfile and not some file that is part of the database.
This script can easily be tailored to rotate any type of logs.
Make sure the OS user running the script has access rights to create and operate on the archive directory.
#!/bin/bash
# by kp
# adjust the variable MAX to suit your needs
# it is meant for those of you with a very large log
# install this in cron on a weekly basis
# 00 07 * * 0 bash /root/mongod_clean.bash > /root/mongod_clean.out
# consider the --quiet switch in mongoDB to reduce the amount of log info
export LOGFILE=/var/log/mongodb/mongod.log
export ARCHIVE=/var/log/mongod_log_archive
export TODAY=$(date +"%Y%m%d")
export DAYS=30
export MAX=20000
if [ ! -d "$ARCHIVE" ]
then
    mkdir -p "$ARCHIVE"
fi
if [ -f "$ARCHIVE/mongod.log.$TODAY" ]
then
    echo "already cleaned today, rename copy from today under $ARCHIVE"
    exit 1
fi
if [ -f "$LOGFILE" ]
then
    lines=$(wc -l < "$LOGFILE")
    if [ "$lines" -gt "$MAX" ]
    then
        # very large log: keep only the newest MAX lines
        tail -n "$MAX" "$LOGFILE" > "$ARCHIVE/mongod.log.$TODAY"
    else
        cp -p "$LOGFILE" "$ARCHIVE/mongod.log.$TODAY"
    fi
    # truncate in place so the running mongod keeps writing to the same inode
    cat /dev/null > "$LOGFILE"
    # purge archived copies older than DAYS days
    find "$ARCHIVE" -type f -mtime +$DAYS -name 'mongod.log.*' | xargs rm -f
    echo "$LOGFILE truncated ok"
    exit 0
else
    echo "$LOGFILE missing"
    exit 1
fi
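To put it on the weekly schedule suggested in the header, a hedged one-liner that appends the entry to root's crontab (script path as assumed in the comments above):

( crontab -l 2>/dev/null; echo '00 07 * * 0 bash /root/mongod_clean.bash > /root/mongod_clean.out 2>&1' ) | crontab -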
Friday, 12 January 2018
create rman logs when using oem
Have you ever wondered where the backup logs are kept when you run rman from OEM? Interested? Read on.
When using rman from OEM, all your logs are preserved inside the rman catalog database, and OEM will remove old logs according to its policy. This is just an example of how to pull the logs out of OEM and keep them on the filesystem, while also grepping for errors and missing backups.
Normally you would add some extra glue to cron, send off a daily mail to the sysadmin, and possibly preserve only a number of days of logs in your main directory; if you have many databases it adds up quite a bit. I may cover this in a later blog post.
The trick of adding a constant to your column list and later grepping for that constant can be used in many scripts.
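Why it works: sqlplus mixes banners, headings and feedback into the same output stream, but only the selected rows carry the constant, so a grep for it keeps exactly the data. A stripped-down sketch of the pattern (the db: prefix is arbitrary):

$ORACLE_HOME/bin/sqlplus -s /nolog <<eol | grep '^db:' | cut -d ':' -f2
connect / as sysdba
select 'db:'||name from rman.rc_database;
exit
eol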
First we need a place to store the logs.
mkdir /var/log/rman_logs
Please modify according to your own environment.
#!/bin/bash
# by kp, to be run as the ora owner on the ora server
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4
export ORACLE_SID=RMAN
export HOST=$(hostname)
export OUTPUT=/var/log/rman_logs
export TODAY=$(date +"%Y%m%d")
export MISSING=$OUTPUT/missing-$TODAY.lst
export ERRORS=$OUTPUT/errors-$TODAY.lst
export STATUS=0
# add a where clause to the select if you want to omit any of the dbs
$ORACLE_HOME/bin/sqlplus /nolog <<eol|grep db:|cut -d ':' -f2 > $OUTPUT/rman_db.list
connect / as sysdba
select distinct 'db:'||name from rman.rc_database;
exit
eol
# one spool file per database, named <db>-<yyyymmdd>.lst by sqlplus
for i in $(cat $OUTPUT/rman_db.list); do
$ORACLE_HOME/bin/sqlplus /nolog <<eol > /dev/null
connect / as sysdba
set linesize 200 trimspool on
column tm new_value file_time noprint;
select to_char(sysdate,'YYYYMMDD') tm from dual;
spool $OUTPUT/$i-&file_time
select output from rman.rc_rman_output where session_key in
(select session_key from rman.rc_rman_backup_job_details
where db_name='$i' and
start_time > sysdate-1 and
input_type <> 'ARCHIVELOG')
order by recid;
exit
eol
done
echo "---------------------------------------------"
echo "this status is sent from cron on $HOST, ora user"
echo ""
echo "output under $OUTPUT"
echo ""
echo "check rman_status_$TODAY.lst"
# a spool file containing "no rows selected" means no backup was logged for that db
grep -l "no rows" $OUTPUT/*-$TODAY.lst > $MISSING
STATUS=$?
if [ "$STATUS" -eq 0 ]
then
    echo ""
    echo "rman backup missing for some databases"
    echo ""
    cat $MISSING
    echo ""
    echo "consider a manual backup for those missing"
else
    rm -f $MISSING
fi
egrep -i 'error|rman-' $OUTPUT/*-$TODAY.lst > $ERRORS
STATUS=$?
if [ "$STATUS" -eq 0 ]
then
    echo ""
    echo "rman errors detected"
    echo ""
    echo "check $ERRORS"
    STATUS=1
else
    rm -f $ERRORS
    STATUS=0
fi
echo ""
echo "---------------------------------------------"
exit $STATUS
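The cron glue hinted at above could look like the sketch below; the script path and mail address are assumptions, and the tee is what produces the rman_status file the script tells you to check (cron mails the same output via MAILTO):

MAILTO=sysadmin@example.com
00 06 * * * bash /home/oracle/rman_status.bash 2>&1 | tee /var/log/rman_logs/rman_status_`date +\%Y\%m\%d`.lst
30 06 * * 0 find /var/log/rman_logs -type f -mtime +30 -name '*.lst' | xargs rm -f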
Thursday, 11 January 2018
oem agent
This assumes you already have an OEM server running at your site. You don't need to download the client agent package, because you already have it embedded in your OEM server. Here is what you need to do (a command sketch follows the list):
- pull out the software package from your OEM server
- emcli get_agentimage
- unzip the image
- edit the response file with your settings
- run agentDeploy.sh
- define the registration password on the OEM server
- use the procedure mgmt_admin.cleanup_agent to remove the agent again
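A hedged sketch of the same flow on the command line; the platform, version, zip name and paths are assumptions to adapt to your site:

emcli login -username=sysman
emcli get_agentimage -destination=/tmp/agent -platform="Linux x86-64" -version=13.2.0.0.0
cd /tmp/agent && unzip *AgentCore*.zip
# edit agent.rsp: OMS_HOST, EM_UPLOAD_PORT, AGENT_REGISTRATION_PASSWORD, ...
./agentDeploy.sh AGENT_BASE_DIR=/u01/app/oracle/agent RESPONSE_FILE=/tmp/agent/agent.rsp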
Wednesday, 10 January 2018
dba
As a start of the new year I figured now is the time to share some Linux and Oracle stuff for daily task operations. I work as a professional dba, and have done so for the last 20 years or more. Mostly Oracle, Linux and bash scripting, but also a lot of other weird stuff. If I think someone else can use my stuff I will share it here.
Let's see if I can keep this blog alive along with my photo blog :)