Tuesday, 28 August 2018

oracle audit log configuration

This post is about controlling the audit logs on your system, at least the ones that go into your database. The audit trails located on your file system are not covered here. Your vendor will normally set up an audit configuration, but don't count on it. Worst case, they might not set up anything at all. A default oracle 11g install will configure the standard audit trail to log at least logon / logoff events, and even this will add up quite a bit over time. By default your audit logs end up in your SYSTEM tablespace, and really, they should not be there.

So, if you have a running database that has been there a while, this post might be worth reading. If you have a brand new database, or your audit data is, let's say, less than a GB, you can more or less just follow the instructions.

If you have large amounts of audit data, many GB, you might need to do this in steps. Be aware that the cleanup can generate extensive archive logging, as a rule of thumb around the size of the portion you are cutting away. So use the queries below to do the math. Post me questions if you like :)

This example only keeps audit logs online for 90 days. If you need to go back more than 90 days you must restore a copy of the database to cover your needs.


First of all, you need a good overview of your existing audit data.

select segment_name, round(bytes/1024/1024) mb from dba_segments where segment_name in ('AUD$', 'FGA_LOG$');

select table_name, tablespace_name from dba_tables where table_name in ('AUD$', 'FGA_LOG$') ORDER BY table_name;

NOTE! for test db's you might see entries in the audit log belonging to other dbid's. Purging will only affect rows belonging to the active dbid.

-- select dbid,count(*) from aud$ group by dbid;

// Find the oldest entries
-- select dbid,min(NTIMESTAMP#) from aud$ group by dbid;

// Check out historic number of rows per month and use this to calculate the space you need to allocate for the future.
-- select dbid,to_char(NTIMESTAMP#, 'yyyy-mm'), count(*) from aud$ group by dbid,to_char(NTIMESTAMP#, 'yyyy-mm') order by 1,2;
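
// Rough sketch: estimate how many rows per dbid a 90 day retention would purge on the first run (uses the same ntimestamp# column as above)
-- select dbid,count(*) from aud$ where ntimestamp# < sysdate-90 group by dbid;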

// Check out recent events trapped in the audit log
column terminal format a20
column action_name format a10
column userhost format a30
column os_username format a20
-- column extended_timestamp format a40
select action_name,returncode,username,count(*)
from dba_audit_session where timestamp>(sysdate-7) group by action_name,returncode,username;

NOTE! Use the size from the first query and create a new tablespace with an initial size of 2 x the current size of AUD$

-- create tablespace audit_aux datafile '/u02/oradata/your-db/audit_aux01.dbf' size 200m autoextend on next 100m maxsize 4000m;

/* The following will physically move the AUD$ table from its current tablespace (SYSTEM by default) to your new tablespace. */

BEGIN
  DBMS_AUDIT_MGMT.set_audit_trail_location(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDIT_AUX');
END;
/

/* The following will physically move the FGA_LOG$ table from its current tablespace (SYSTEM by default) to your new tablespace. */

BEGIN
  DBMS_AUDIT_MGMT.set_audit_trail_location(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
    audit_trail_location_value => 'AUDIT_AUX');
END;
/

// Check if already initialized, if not do it.
set pagesize 150
set linesize 200
column parameter_name format a30
column parameter_value format a20
SELECT * FROM DBA_AUDIT_MGMT_CONFIG_PARAMS;

SET SERVEROUTPUT ON
BEGIN
  IF DBMS_AUDIT_MGMT.is_cleanup_initialized(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD) THEN
    DBMS_OUTPUT.put_line('YES');
  ELSE
    DBMS_OUTPUT.put_line('NO');
  END IF;
END;
/

SET SERVEROUTPUT ON
BEGIN
  IF DBMS_AUDIT_MGMT.is_cleanup_initialized(DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD) THEN
    DBMS_OUTPUT.put_line('YES');
  ELSE
    DBMS_OUTPUT.put_line('NO');
  END IF;
END;
/

/* This initializes cleanup for both audit trail types located in the db, AUD$ and FGA_LOG$.
NOTE! If you have not already moved the audit tables to the new tablespace, this will trigger a move. The destination will be as specified by: SELECT * FROM DBA_AUDIT_MGMT_CONFIG_PARAMS; */

BEGIN
  DBMS_AUDIT_MGMT.init_cleanup(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_DB_STD,
    default_cleanup_interval => 24 /* hours */);
END;
/

// Check to see if "last archived audit timestamp" is set
-- select * FROM dba_audit_mgmt_last_arch_ts;

// If not, set it; 90 days keeps audit logs online for roughly three months
BEGIN
  DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => SYSTIMESTAMP-90);
END;
/

BEGIN
  DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
    last_archive_time => SYSTIMESTAMP-90);
END;
/

// Create jobs to purge the audit logs

begin
   dbms_audit_mgmt.create_purge_job (
   audit_trail_type            => dbms_audit_mgmt.audit_trail_aud_std,
   audit_trail_purge_interval  => 24,
   audit_trail_purge_name      => 'std_audit_trail_purge_job',
   use_last_arch_timestamp     => TRUE
   );
end;
/

begin
   dbms_audit_mgmt.create_purge_job (
   audit_trail_type            => dbms_audit_mgmt.audit_trail_fga_std,
   audit_trail_purge_interval  => 24,
   audit_trail_purge_name      => 'fga_audit_trail_purge_job',
   use_last_arch_timestamp     => TRUE
   );
end;
/

// Create a job that moves the last archive timestamp forward every day, preserving 90 days online
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'audit_last_archive_time',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE
      l_days NUMBER := 90;
      BEGIN
DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(DBMS_AUDIT_MGMT.audit_trail_aud_std, TRUNC(SYSTIMESTAMP)-l_days);
DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(DBMS_AUDIT_MGMT.audit_trail_fga_std, TRUNC(SYSTIMESTAMP)-l_days);
      END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'freq=daily; byhour=0; byminute=0; bysecond=0;',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Automatically set audit last archive time.');
END;
/

// if you ever need it, jobs can be kicked off manually
begin
   dbms_scheduler.run_job (job_name => 'audit_last_archive_time');
end;
/

// to list the audit jobs only
select * from DBA_AUDIT_MGMT_CLEANUP_JOBS;

// to list all jobs
select owner||'.'||job_name||'.'||start_date||'.'||next_run_date||'.'||state from dba_scheduler_jobs where enabled='TRUE';
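
// To check that the purge jobs actually run, look at the scheduler run history (standard view; job names as created above, stored in upper case)
select job_name, status, actual_start_date from dba_scheduler_job_run_details where job_name in ('STD_AUDIT_TRAIL_PURGE_JOB','FGA_AUDIT_TRAIL_PURGE_JOB','AUDIT_LAST_ARCHIVE_TIME') order by actual_start_date desc;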

Friday, 20 July 2018

Script already running - bash

For scripts with long execution times it can be a good idea to check whether the script is already running. This avoids mess and unnecessary load on the server caused by multiple instances of the same script.

Add the following code to the script header:

# count processes whose command line matches this script (the count includes the current instance)
already=`ps -ef | grep "$0" | grep -v grep | wc -l`
if [ "$already" -gt 1 ]
then
   echo "$0 already running"
   exit 1
fi

Monday, 16 July 2018

Clickable headers in PHP

This is a tiny example of how to make header columns clickable. The code lists the content of the table my_table in a MySQL database called myDb. It can be a good idea to add a link to your main menu as well, to avoid getting lost :)


my.php:

<!DOCTYPE html>
<html>
<head>
<style>
table, th, td {
    border: 1px solid black;
}
</style>
</head>
<body>
<?php

$servername = "myServer";
$username = "me";
$password = "myPw";
$dbname = "myDb";
$nbo = 0;

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);

// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

if (!$conn->set_charset("utf8")) {
    printf("Error loading character set utf8: %s\n", $conn->error);
    exit();
}
// to prevent getting totally lost
echo '<a href="index.html">HOME</a>';
$column='LastLogon';
$sort='desc';

if(isset($_GET['sorting'])) {
  if($_GET['sorting']=='asc')
     $sort='desc';
  else
     $sort='asc';
}

if(isset($_GET['column'])) {
  if($_GET['column']=='Name')
     $column = "Name";
  elseif($_GET['column']=='Company')
     $column = "Company";
  elseif($_GET['column']=='LastLogon')
     $column = "LastLogon";
}

$sql = "select Name,Company,Department,LastLogon from my_table order by $column $sort";
$result = $conn->query($sql);
// now, instead of having just the column name in the table header, you must add a link and pass the column name
if ($result->num_rows > 0) {
    echo '<table><tr> <th><a href="my.php?sorting='.$sort.'&column=Name">Name</a></th> <th><a href="my.php?sorting='.$sort.'&column=Company">Company</a></th>  <th>Department</th> <th> <a href="my.php?sorting='.$sort.'&column=LastLogon">LastLogon</a> </th> </tr>';
    // output data of each row
    while($row = $result->fetch_assoc()) {
        $nbo++;
        echo "<tr><td>" . $row["Name"]. "</td><td>" . $row["Company"] . "</td><td>" . $row["Department"] .  "</td><td>" . $row["LastLogon"] . "</td></tr>";
    }
    echo "</table>";
    echo "<br>" . $nbo . " rows";
} else {
    echo "0 results";
}

$conn->close();
?>
</body>
</html>

Friday, 25 May 2018

Troubleshoot oracle em 12c agent

If your oracle em agent refuses to start on your server and you see an "out of heap/memory" error, try moving all files under the state directory to a temp directory.

  • make sure the agent is all dead, no java procs or anything 
  • mkdir /tmp/agent_state.bad 
  • mv $ORACLE_BASE/product/em/sysman/emd/state/* /tmp/agent_state.bad 
  • now start the agent again, as root: /etc/init.d/gcstartup start
  • use emctl if you have the oms server running on the same server

Monday, 23 April 2018

Expanding an oracle standby database

If you have an application using both a primary and a standby database you might have experienced the error ORA-279. This can occur if you expand your primary database with new data files without updating your standby database.

When the archive log containing the expansion gets applied to your standby database you will see the following error in your alert log:

Parallel Media Recovery started
ORA-279 signalled during: ALTER DATABASE RECOVER  database until time '2018-04-23:09:12:17' using backup controlfile  ...
ALTER DATABASE RECOVER    CONTINUE DEFAULT
Media Recovery Log /archive-file
Mon Apr 23 09:44:37 2018
File #72 added to control file as 'UNNAMED00072'. Originally created as:
'newly-added-file.dbf'
Recovery interrupted!

The fix to this problem is to tell the standby database about the new file.

Standby database> sqlplus / as sysdba
  • alter database create datafile 'UNNAMED00072' as 'newly-added-file.dbf';

Now re-run the recovery process. It should complete with no errors. The UNNAMED number will of course vary for each install; use your alert log for the exact info.
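
If you are unsure which files ended up as placeholders, the standby control file lists them (standard v$datafile view):
  • select file#, name from v$datafile where name like '%UNNAMED%';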

Tuesday, 10 April 2018

Oracle read only account

Create a new read only user or grant rights to existing user
  • create user new-user identified by pwd;
  • grant connect, create session, select any table to new-user;
Now you can select any object by prefixing with owner. If you want to browse dictionary objects you also need
  • grant select any dictionary to new-user;
For permanent users I always prefer to grant specific access only to the objects needed and create synonyms pointing to these objects. This gives a more generic source and also serves as better documentation of what the account is used for. And of course revoke the select any table right. A sketch of that approach follows below.
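
A minimal sketch of the object-level approach, assuming a schema app_owner with a table customers (both names are just placeholders):
  • grant select on app_owner.customers to new-user;
  • create synonym new-user.customers for app_owner.customers; (run as a dba, or grant create synonym and let the user create it as a private synonym)
  • revoke select any table from new-user;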

Tuesday, 13 March 2018

Expired oracle accounts - GRACE period

Have you ever experienced expired oracle users? Or perhaps seen a user get close and enter the GRACE period. This is a common mechanism to ensure that users renew their passwords and follow the specified security profiles. But if a service account expires, applications might begin to fail. If you have a monitoring tool that allows you to add your own checks, it can be a good idea to look for accounts that are about to expire, to catch the situation before it becomes a problem. The reason a service account expires may be that it has the wrong profile, so verify the profile as well. It does not take long to make a check that covers all databases. Before an account expires, it enters a GRACE period, which gives you some time to adjust. In its simplest form, a check may look something like this. Install it in cron and run it every morning.

#!/bin/bash

 script_dir="/your_scripts"
 logfile="grace.log"
 ORACLE_BASE="/your_oracle"
 receivers="your-email-address"
 running_db=`ps -ef|grep smon|grep -v grep|awk '{print $8}'|awk -F "ora_smon_" '{print $2}'`

 export ORACLE_BASE

 date > $script_dir/$logfile

 for i in `echo $running_db`;do
    ORACLE_SID=$i
    ORAENV_ASK=NO
    . /usr/local/bin/oraenv 1> /dev/null 2> /dev/null

    $ORACLE_HOME/bin/sqlplus /nolog <$script_dir/grace.sql |grep tag: > $script_dir/$i.grace

    grep "GRACE" $script_dir/$i.grace 1> /dev/null 2> /dev/null
    found="$?"

    if [ "$found" -eq 0 ]
    then
       echo "$i contain one or more accounts in grace mode" >>  $script_dir/$logfile
       cat $script_dir/$i.grace >> $script_dir/$logfile
    else
       rm -f $script_dir/$i.grace
    fi
 done

 nbo_lines=`cat $script_dir/$logfile|wc -l`

 if [ "$nbo_lines" -gt 1 ] then
    cat $script_dir/$logfile | mailx -s "Oracle warning - accounts in GRACE mode" $receivers
 fi

 exit 0

grace.sql:
 connect / as sysdba
 select 'tag:'||username,account_status from dba_users where account_status like '%GRACE%';
 exit;
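
If you also want to catch accounts before they enter the GRACE period, a hedged variant of the query can flag upcoming expiries as well (expiry_date is a standard dba_users column; 14 days is an arbitrary threshold):
 select 'tag:'||username,account_status from dba_users where account_status like '%GRACE%' or (account_status='OPEN' and expiry_date < sysdate+14);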

Wednesday, 7 March 2018

Search for user information

You may have reasons to search for user information in your databases. Maybe you need to know the source of all clients. Let's say you want to move one or more of your databases. Then you need to find all the associated clients and alter their config. A query, in oracle, to view logged on users will only give you the activity right now.
  • select username||'.'||osuser||'.'||machine||'.'||program||'.'||process from v$session where type='USER'
If you want to get the complete picture you need to do this over a period of time, weeks. Either as a regular job that saves the output, or with a logon trigger that does it. When you capture logon information at intervals you can record logon and logout timing as well.
  • A scheduled task / cron job that runs on a regular basis, with the purpose of getting an overview of connected clients, can look something like this (you need to create the table s_accounting first; a sketch of it follows after the block)
  • declare
    ls_day  varchar2(8);
    ls_time varchar2(5);

    cursor c_user is
      select username,osuser,machine,count(*) sessions
        from v$session
      where username <> 'SYS'
           and status <> 'KILLED'
           and type = 'USER'
     group by username,osuser,machine
     order by machine,username;
  • begin
       ls_day  := to_char(sysdate,'DDMMYYYY');
       ls_time := to_char(sysdate,'HH24:MI');
       for c_user_rec in c_user loop
          update s_accounting set s_logout=ls_time
           where s_day = ls_day and
                 s_username=c_user_rec.username and
                 s_osuser=c_user_rec.osuser and
                 s_machine=c_user_rec.machine;
          if SQL%NOTFOUND then
             insert into s_accounting (s_day,s_username,s_osuser,s_machine,s_login,s_logout)
                    values (ls_day, c_user_rec.username, c_user_rec.osuser,
                            c_user_rec.machine,ls_time, ls_time);
          end if;
       end loop;
    commit;
    exception
       when others then
          rollback;
          raise;
    end;
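
A minimal sketch of the s_accounting table the block above expects. The column names are taken from the insert and update statements; the data types and lengths are my assumptions:
  • create table s_accounting (s_day varchar2(8), s_username varchar2(128), s_osuser varchar2(128), s_machine varchar2(128), s_login varchar2(5), s_logout varchar2(5));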

If you use a logon trigger instead, you will have the actual user in variables and can make the code much more compact. Keep in mind that a logon trigger can add extra load on your system if you have a busy web server in front. To get an idea of the amount of logon/logoff activity on your system you can check the audit log.

  • select action_name,count(*) from dba_audit_session where timestamp>(sysdate-7) group by action_name;

Sometimes you need to go deeper. Let's say you need more details about the login, like name, email, phone etc. This kind of information, if it exists, will be stored in your application data model. Sometimes this is linked to the connections you see, other times not. Most applications will have tools to handle this kind of user information. If you do not know the application and really need to get hold of the stored user information, you have to ask the vendor, or you can start looking for it yourself. As a start you could try the query below. Check out the result and query the objects for user info. Sometimes you need to go wider. Look for extra attributes, like last logon, email, short name, anything you can use to qualify the user and possibly add some extra value. Finally, you quite often see applications using a central user repository for authentication. The application system documentation might give you an idea or two as well. Good luck.
  • select 'object='||owner||'.'||object_name from dba_objects where owner not like 'SYS%' and object_type in ('TABLE','VIEW') and (object_name like '%USER%' or object_name like '%LOGIN%' or object_name like '%LOGON%') and owner in (select username from dba_users where account_status='OPEN') order by owner,object_name;

Tuesday, 27 February 2018

oracle dba account - connect through

The system user has the dba role granted. On most systems this user is regarded as an internal oracle admin user, not to be used by administrators unless they have to. Administrators should have their own named users with all the necessary rights granted.

If you for some reason want to lend someone an admin user without giving them the password you can use this trick. Shown with user system, but it works for any user.

SQL> create user kpe identified by kpe;
User created.
SQL> alter user system grant connect through kpe;
User altered.
SQL> connect kpe[system]/kpe
Connected.
SQL> show user
USER is "SYSTEM"
SQL> alter user system revoke connect through kpe;
SQL> drop user kpe;
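
To list the proxy connections that are currently allowed, there is a dictionary view for it:

SQL> select proxy, client from proxy_users;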

Friday, 16 February 2018

GDPR - example 1

The previous post will give you a quick intro to this topic.  http://kp-dba.blogspot.no/2018/02/gdpr-practical-approach.html

Please note that this is not a product. It is my own practical approach. It takes quite a bit of work to dig out all the plugins you need. What you see here are just examples, meant as inspiration.

There's also lots of commercial software that can help you out.

The most important part of this job is to locate all relevant information from underlying systems. The overview will never be better than the program that presents it. Obviously, you can add plugins crawling across several data sources, to find documents, signatures, etc., but you don't start there. As mentioned earlier, I have chosen to use MySQL. It has the functionality I'm looking for. In addition, it is free and can be installed on many platforms. What am I looking for?

  • Good support for character sets
  • Easy handling of data in / out of db via the file system
  • Generally good functionality and easily recognizable

Character sets are important. When you retrieve data from sub systems, the output encoding can vary. If possible, specify the character set when you extract the data. Keep in mind that the information will be loaded into a new data model and special characters must be represented correctly.

See below for two small examples of extracting data from subsystems. You might consider installing a cron job / scheduled task to refresh the lists on a regular basis.

List of inactive AD accounts:

File inactive-users.bat:
   powershell.exe -ExecutionPolicy ByPass -file inactive-users.ps1

File inactive-users.ps1:
import-module activedirectory
$domain = "your-domain"
$DaysInactive = 365
$time = (Get-Date).Adddays(-($DaysInactive))
# get AD Users with LastLogonTimestamp less than 365 days
Get-ADUser -Filter {LastLogonTimeStamp -lt $time } -Properties LastLogonTimeStamp,Name,Company,Department,SamAccountName |
# output to csv
select-object Name,Company,Department,SamAccountName,@{Name="Stamp"; Expression={[DateTime]::FromFileTime($_.lastLogonTimestamp).ToString('yyyy-MM-dd_hh:mm:ss')}} | export-csv inactive-users.csv -notypeinformation -encoding utf8

List of users from one of your applications:

ORAENV_ASK=NO
ORACLE_SID=your-db
ORACLE_HOME=your-oracle-home
. $ORACLE_HOME/bin/oraenv
sqlplus /nolog <<eol|grep "tag:"|awk -F "tag:" '{ print $2 }' > app-user-list.csv
  connect user/pw
  select 'tag:'||user-id||','||full_name||','||company||','||days_since_last_login from users;
  exit;
eol

You now have some nice comma separated files that represent a small selection of what you are dealing with. Now it's time to load those files into MySQL. Be sure to set the character set and spot check your data. If you find that you have missed fields containing useful information, you must add those fields and reproduce your csv file.

  • create table ad (Name varchar(60), Company varchar(70), Department varchar(40), SamAccountName varchar(40), Stamp varchar(30));
  • load data local infile 'inactive-users.csv' into table ad character set utf8 fields terminated by ',';
  • show warnings;
  • check the content: select * from ad;
  • same procedure for the app file; a sketch follows below
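
A hedged sketch of that "same procedure" for the app extract. The table name app matches the queries further down; the column names are placeholders written with underscores (MySQL does not accept unquoted hyphens) and the types are assumptions:

  • create table app (user_id varchar(40), full_name varchar(80), company varchar(70), days_since_last_login int);
  • load data local infile 'app-user-list.csv' into table app character set utf8 fields terminated by ',';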

MySQL is, by default, case insensitive, but if your system is case sensitive, you can always use the lower and upper functions. The tables now represent the files you have extracted from your sub systems. Depending on the where clause used in the plugin you might now find some weird rows: mistakes from the production environment, mistypings, users intended for courses that were never held and so on. But as you have loaded logon data from several sub systems you should be able to connect them. Really, what you want to find is all digital traces for one specific person. You have to get familiar with the data and look for connections. If there are no connections you can make them: create a table that links the sub systems together. To find your entries you can make your queries wider. If you intend to reload the tables representing the sub systems you should be careful altering them; create new tables or views to add the extra functionality you need. In this example I assume that the windows short name (samaccount) is the same as the one used in the app. If this is not the case you have to build a mapping table and join it in a sub select on the correct id, as sketched after the list below.

  • To list inactive users: select * from ad;
  • To list inactive users still registered in app: select * from ad where samaccountname in (select user-id from app);
  • You might want to add functionality to days_since_last_login
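
A hedged sketch of the mapping-table approach mentioned above, assuming the AD short name and the app user id differ (all names are placeholders):

  • create table id_map (samaccountname varchar(40), app_user_id varchar(40));
  • select ad.* from ad join id_map m on m.samaccountname = ad.SamAccountName join app a on a.user_id = m.app_user_id;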

A simple php sample presenting inactive users still registered in your app could look something along the lines of:

<!DOCTYPE html>
<html>
<head>
<style>
table, th, td {
    border: 1px solid black;
}
</style>
</head>
<body>

<?php
set_include_path('dir ref to your php7 modules');
$servername = "localhost";
$username = "your-acc";
$password = "your-pw";
$dbname = "your-db";

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

if (!$conn->set_charset("utf8")) {
    printf("Error loading character set utf8: %s\n", $conn->error);
    exit();
}

$sql = 'SELECT * from ad where SamAccountName in (select user-id from app) order by stamp';
$result = $conn->query($sql);

if ($result->num_rows > 0) {
    echo "<table><tr><th>Name</th><th>Company</th><th>Department</th><th>SamAccountName</th><th>LastLogin</th></tr>";
    // output data of each row
    while($row = $result->fetch_assoc()) {
        echo "<tr><td>" . $row["Name"]. "</td><td>" . $row["Company"] . "</td><td>" . $row["Department"] .  "</td><td>" . $row["SamAccountName"] . "</td><td>" . $row["Stamp"] . "</td></tr>";
    }
    echo "</table>";
} else {
    echo "0 results";
}

$conn->close();
?>

</body>
</html>

Wednesday, 14 February 2018

GDPR - a practical approach

GDPR is an EU regulation that will strengthen the control we as individuals have over our own digital information. It will also limit the amount of information businesses can store about us. Most businesses need to adjust to be compliant with the regulation. I do realise that GDPR is much bigger than what is discussed here; as an IT person I will focus on the applications and the personal information stored in those systems. What does this mean in real life? Well, first of all, let's start with the obvious, formed as questions:

  • Do you have procedures for removing customers or former employees from all internal systems? I.e. mail, home areas, access to applications, etc.
  • Do you have obvious candidates for removal in your systems?
  • Do you have applications that do not allow deletion of users / customers? If so, are there any procedures for making those users anonymous?
  • Can you find all digital traces from any given person registered in your systems?
  • Do you perform regular checks against your central repository to capture users who will be candidates for deletion?
  • Are your security policies compliant with the regulation?
  • Is it natural to send mail directly to the user, with information about what type of information you have stored? And maybe, if the user has been inactive for a long time, offer an unsubscription?
  • How good are your procedures? Will they survive an audit?

How many times have you logged in to a web shop and found that they have an impressive amount of information about you? It's nice not to key in all this information every time, but at the same time it is also part of a threat. For how long can they keep this information? Customer lists and passwords are sold and shared, and sometimes also abused.

What about the dentist or the physiotherapist? Maybe you want to switch to a new one? Are they able to transfer data about you to the new physiotherapist? Do they clean up their own systems? Banks and insurance companies are better at this. They can easily move a client from one company to another. Whether they clean up afterwards, well, that's another story.

Either way, you need to get started. Maybe you want to buy stuff. Or maybe you want to collect information to find a status, a starting point. What this is about is making your own plugins and letting the system tell you what it looks like. To do that you need someone who can extract information about users in all relevant systems, and maybe also look up the things you initially did not think of. This is typically someone in your IT department who can find the underlying systems and extract the user information you need. Once you have found the information you need, you must assemble it and load it into a separate data model. Here you can either just create queries and retrieve information directly, or make some simple reports. They can come in handy if you need some extra manpower to clean out all the old fun.

I have created a prototype with some examples of how to proceed. I chose to use Apache, MySQL and PHP on Linux, but that is not important here. Choose anything you think will do the trick. I will later show bits and pieces of what I have done. Then I reckon you will get an idea of how you can do something similar. Please join in for a discussion.

Friday, 19 January 2018

oracle archive log history

Have you ever measured the amount of changes you have in your database? Sometimes it can be a good idea to know. Maybe you need to set up a standby database and need details about the traffic between the two systems. Or maybe you plan to move the primary database and need to evaluate the different options. This will vary quite a bit from day to day, but one query that levels this out on a monthly basis is shown below.

Multiply the count by the average size of your archive logs and you will have an idea.

Now may also be a good time to evaluate the number of log switches you have in your database. Every minute is too frequent; every 10 minutes to every hour is a more optimal configuration. Consider resizing the log files (i.e. create a new set of logfiles and drop the old ones) if you are far off.

select to_char(first_time, 'yyyy-mm'), count(*)
  from V$log_history
  group by to_char(first_time, 'yyyy-mm')
  order by 1;
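
If you want sizes rather than counts, v$archived_log records block counts per log, so a hedged variant is (note that the view only covers what is still kept in the control file):

select to_char(first_time, 'yyyy-mm'), round(sum(blocks*block_size)/1024/1024) mb
  from v$archived_log
  group by to_char(first_time, 'yyyy-mm')
  order by 1;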

Sunday, 14 January 2018

mongoDB daemon log file

Do you have a very large mongoDB daemon log file? 

You can install a log rotate package, or you can implement some dead simple bash script that will do more or less the same job. This is what this blog post  is about.

Make sure the script is adjusted for your installation before running it. If you have a very large daemon log file, many GB, you might want to keep just a portion of it for the first run. That's the idea of the variable MAX (number of lines). Do some checks up front to find a good value for this parameter. The default here is set to 20000 lines. If you do a "tail -20000 mongod.log|more" you will get an idea of how many days back that evaluates to. If you want to preserve the entire file you really should copy it up front and store it on a filesystem that has the capacity to hold it.

And for God's sake, make sure the variable LOGFILE actually points to the daemon logfile and not some file that is part of the database.

This script can easily be tailored to rotate any type of logs.

Make sure the os user has access rights to create and operate on the archive directory.

#!/bin/bash
# by kp
# adjust the variable MAX to suit your needs
# it is meant for those of you with a very large log
# install this in cron on a weekly basis
# 00 07 * * 0 bash /root/mongod_clean.bash > /root/mongod_clean.out
# consider the --quiet switch in mongoDB to reduce the amount of log info
export LOGFILE=/var/log/mongodb/mongod.log
export ARCHIVE=/var/log/mongod_log_archive
export TODAY=`date +"%Y%m%d"`
export DAYS=30
export MAX=20000

if [ ! -d $ARCHIVE ]
then
   mkdir $ARCHIVE
fi

if [ -f $ARCHIVE/mongod.log.$TODAY ]
then
   echo "already cleaned today, rename copy from today under $ARCHIVE"
   exit 1
fi

if [ -f $LOGFILE ]
then
   lines=`wc -l $LOGFILE |awk '{ print $1 }'`
   if [ $lines -gt $MAX ]
   then
      tail -$MAX $LOGFILE > $ARCHIVE/mongod.log.$TODAY
   else
      cp -p $LOGFILE $ARCHIVE/mongod.log.$TODAY
   fi

   cat /dev/null > $LOGFILE
   find $ARCHIVE -type f -mtime +$DAYS -name 'mongod.log.*' |xargs rm -f

   echo "$LOGFILE truncated ok"
   exit 0
else
   echo "$LOGFILE missing"
   exit 1
fi

Friday, 12 January 2018

create rman logs when using oem

Have you ever wondered, when using oem, where the backup logs are? Interested? Read on.
 
When using rman from oem all your logs are preserved inside the rman catalog database, and oem will remove old logs according to its policy. This is just an example of how to pull the logs out of the catalog and keep them on the filesystem, while also grepping for errors and missing backups.

Normally you would add some extra stuff to cron and send off a daily mail to the sys admin, and possibly only preserve a certain number of days of logs in your main directory. If you have many databases it will add up quite a bit. I can add this in a later blog post.

The trick of adding a constant to your column list, and later grepping for that constant, can be used in many scripts.

First we need a place to store the logs.
mkdir /var/log/rman_logs

Please modify according to your own environment.

#!/bin/bash
# by kp, to be run as the ora owner on the ora server
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4
export ORACLE_SID=RMAN
export HOST=`hostname`
export OUTPUT=/var/log/rman_logs
export TODAY=`date +"%Y%m%d"`
export MISSING=$OUTPUT/missing-$TODAY.lst
export ERRORS=$OUTPUT/errors-$TODAY.lst
export STATUS=0

# add a where clause to the select if you want to omit any of the dbs
$ORACLE_HOME/bin/sqlplus /nolog <<eol|grep db:|cut -d ':' -f2 > $OUTPUT/rman_db.list
 connect / as sysdba
 select distinct 'db:'||name from rman.rc_database;
 exit
eol

for i in `cat $OUTPUT/rman_db.list`;do
  $ORACLE_HOME/bin/sqlplus /nolog <<eol > /dev/null
    connect / as sysdba
    column tm new_value file_time noprint;
    select to_char(sysdate,'YYYYMMDD') tm from dual;
    spool $OUTPUT/$i-&file_time
    select output from rman.rc_rman_output where session_key in
        (select session_key from rman.rc_rman_backup_job_details
            where db_name='$i' and
                       start_time > sysdate-1 and
                       input_type <> 'ARCHIVELOG')
             order by recid;
    exit
eol
done

echo "---------------------------------------------"
echo "this status is sent from cron on $HOST, ora user"
echo ""
echo "output under $OUTPUT"
echo ""
echo "check rman_status_$TODAY.lst"

grep -l "no rows" $OUTPUT/*-$TODAY.lst > $MISSING
STATUS=$?

if [ "$STATUS" -eq 0 ]
then
  echo ""
  echo "rman backup missing for some databases"
  echo ""
  cat   $MISSING
  echo ""
  echo "consider a manual backup for those missing"
else
  rm -f $MISSING
fi

egrep -i 'error|rman-' $OUTPUT/*-$TODAY.lst > $ERRORS
STATUS=$?

if [ "$STATUS" -eq 0 ]
then
  echo ""
  echo "rman errors detected"
  echo ""
  echo "check $ERRORS"
  STATUS=1
else
  rm -f $ERRORS
  STATUS=0
fi
echo ""
echo "---------------------------------------------"
exit $STATUS


Thursday, 11 January 2018

oem agent

This assumes you already have an OEM server running at your site. You don't need to download the client agent package because you already have it embedded in your oem server. Here is what you need to do:


  • pull out the software package from your OEM server.
    • emcli get_agentimage
  • unzip the image
  • edit the response file with your settings
  • run agentDeploy.sh
  • define the registration pw on OEM server
  • use proc mgmt_admin.cleanup_agent to remove the agent

Wednesday, 10 January 2018

dba

As a start of the new year I figured now is the time to share some Linux and Oracle stuff for daily task operations. I work as a professional dba, and have done so for the last 20 years or more. Mostly Oracle, Linux and bash scripting, but also a lot of other weird stuff. If I think someone else can use my stuff I will share it here.

Let's see if I can keep this blog alive along with my photo blog :)