Users, Roles and Target Groups

The new additions to patch set 1 are now complete:

  • User and Role Management

First off, this is entirely optional. If you have been using the zztat_admin account to manage zztat via its user interface, you can continue to do so without having to change anything.

If you so choose, however, you can now create your own users and roles to control who can access which features – and which databases – in your zztat environment.

With this in place, you can now restrict your developers’ access to only the development databases, while your production DBAs or database engineers have access to all databases.

  • Target Database Groups

To simplify management in larger environments, target databases can now be grouped together. This enables access to the various databases to be controlled in a more streamlined manner.

Access to databases will be honored globally in the zztat environment, including the zztat mobile application!

Patch set 1 is nearing completion – stay tuned for more cool announcements of upcoming features, including automated 10046 tracing and trace file collection!


Slack & Telegram Integration!!

Hello everyone

It’s been a while – times have been busy! I am pleased to announce that the chat API integration is complete: zztat can now natively interact with both Slack and Telegram to send alerts directly to your team’s favorite chat channel!

How does it work?

Sending a message to a chat channel via the respective API has been implemented as just another reaction in zztat. For instance, zztat ships with the STANDARD_ALERTING reaction chain, which includes the SEND_EMAIL reaction. All you need to do is add the SEND_CHAT reaction to it, and every alert will also go to the configured chat API.

Due to the way chat APIs work, a little extra setup is required, and it naturally differs between the APIs.

The zztat UI has been extended with wizards that guide you through the process from start to finish for both Slack and Telegram.

For example, the Telegram API requires just a token once you have created a bot:

Once you’ve completed the setup wizard and have added the SEND_CHAT reaction to a gauge or to a reaction chain, you can immediately start receiving notifications such as this:

We’re super excited about this feature – coming with zztat patchset 1!

Until next time




zztat’s BASE64 for LOBs

As we made progress developing our mobile app, we realized that we’d want to encode certain things to pass them along with the payload safely across various protocols (XML, JSON, etc).

Now, Oracle provides BASE64 encoding and decoding natively, but unfortunately only for short string data (in RAW form). There’s no overload for BLOB data.

At first glance, this seems like a simple requirement – just write a loop around utl_encode.base64_encode / base64_decode calls, right? Wrong.

The problem has been discussed on various forums, including AskTOM, but there isn’t a clean solution anywhere with actual working code that handles all scenarios and string lengths. So here’s how we did it.

A couple notes:

  • Oracle’s utl_encode.base64_encode adds CRLF into the output string. I consider that a bug, since those characters have no place there. They’re not part of the valid values according to the RFC. This is particularly hard to “spot” because the output string is returned as a RAW. So you’d have to really take it apart to notice it (or fail when decoding it).
  • BASE64 encodes data in groups of 6 bits per character, as per RFC 4648. Thus we should process the strings in chunks whose bit lengths are multiples of both 6 and 8 – for example, 648 is a multiple of both. Multiples of 6 avoid scrambled output from the encoder; multiples of 8 avoid scrambled input to the decoder.
  • The encode chunk size should be a multiple of 24 bits (3 bytes). That way we avoid padding in the middle of our encoded string when concatenating the individual encoded chunks.
  • We use CLOB as the primary input for our function, since we can’t really use raw binary data (BLOB) when working with XML and JSON. The whole point of using BASE64 is to eliminate that and represent any data as a form of “plain text”.
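To see why the chunk sizes matter, here’s a small Python sketch (illustrative only – zztat itself does this in PL/SQL) showing that chunked encoding only concatenates cleanly when the chunk size is a multiple of 3 bytes:

```python
import base64

data = b"A" * 100

# Naive chunking at 7 bytes (not a multiple of 3): every chunk is padded
# individually, so '=' characters end up in the middle of the output.
naive = b"".join(base64.b64encode(data[i:i+7]) for i in range(0, len(data), 7))
print(b"=" in naive[:-4])   # True: padding appears mid-stream

# Chunking at 48 bytes (a multiple of 3): only the final chunk can be padded,
# so the concatenation equals a single-pass encode.
clean = b"".join(base64.b64encode(data[i:i+48]) for i in range(0, len(data), 48))
print(clean == base64.b64encode(data))   # True
```

The 480-byte chunk size used by our encoder follows the same rule: it is a multiple of 3, so no padding can appear mid-stream.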

Let’s take a closer look at the output of Oracle’s base64 encoder:

SQL> select utl_raw.cast_to_varchar2(utl_encode.base64_encode(utl_raw.cast_to_raw(lpad('A', 100, 'A')))) from dual;

QUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFB
QUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFB
QUFBQQ==

Looks fine, doesn’t it?

Well no, it actually doesn’t, but it’s hard to tell from that output. It becomes clearer if we look at the data this way:

SQL> select dump(utl_raw.cast_to_varchar2(utl_encode.base64_encode(utl_raw.cast_to_raw(lpad('A', 100, 'A'))))) from dual;

Typ=1 Len=140: 81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,13,10,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,81,85,70,66,13,10,81,85,70,66,81,81,61,61

Notice the 13,10 in there? That is data injected by the encoder, and it should not be there. There is no mention of added newline characters in RFC 4648.
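The same behavior is easy to reproduce with Python’s standard library, whose MIME-style encoder also injects line breaks. Because CR and LF are not part of the BASE64 alphabet, stripping them is always safe (a quick sketch, not zztat code):

```python
import base64

# The stdlib MIME variant inserts '\n' every 76 characters, much like
# utl_encode.base64_encode inserts CRLF every 64 characters.
mime = base64.encodebytes(b"A" * 100)
print(b"\n" in mime[:-1])               # True: line breaks injected mid-stream

# Stripping CR/LF cannot corrupt the payload, since neither character can
# ever be part of the encoded data itself.
clean = mime.replace(b"\n", b"").replace(b"\r", b"")
print(base64.b64decode(clean) == b"A" * 100)   # True
```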

Without further ado, here’s an encoder that takes a CLOB as input and generates BASE64:

function to_base64 (data in clob) return clob deterministic
is
  l_buf raw(4000);
  l_pos number := 1;
  l_amount number := 480;
  l_bin blob;
  l_out clob;
  l_lang_ctx int := sys.dbms_lob.default_lang_ctx;
  l_warn int;
  l_len number;
  l_off1 int := 1;
  l_off2 int := 1;
begin
  sys.dbms_lob.createtemporary(l_bin, true);
  sys.dbms_lob.converttoblob(l_bin, data, sys.dbms_lob.lobmaxsize, l_off1, l_off2, sys.dbms_lob.default_csid, l_lang_ctx, l_warn);

  l_len := sys.dbms_lob.getlength(l_bin);

  while (l_pos < l_len) loop
    -- read the next chunk; l_amount is in/out and returns the bytes actually read
    sys.dbms_lob.read(l_bin, l_amount, l_pos, l_buf);
    l_out := l_out || replace(replace(sys.utl_raw.cast_to_varchar2(sys.utl_encode.base64_encode(l_buf)), chr(10), null), chr(13), null);
    l_pos := l_pos + l_amount;
  end loop;

  return l_out;
end;


Note that:

  • We’re stripping the carriage return and newline characters right off the bat before we return the result.
  • The data is processed in chunks of 480 bytes – a multiple of 3 bytes (24 bits) – which ensures we get no padding in the middle of the returned LOB.

And here’s the decoder:

function from_base64 (data in clob) return clob deterministic
is
  l_bufr raw(4000);
  l_pos number := 1;
  l_amount number;
  l_len number;
  l_data clob;
  l_raw blob;
  l_rawout blob;
  l_lang_ctx int := sys.dbms_lob.default_lang_ctx;
  l_warn int;
  l_out clob;
  l_off1 int := 1;
  l_off2 int := 1;
begin
  l_data := replace(replace(data, chr(10), null), chr(13), null);

  l_len := dbms_lob.getlength(l_data);

  sys.dbms_lob.createtemporary(l_raw, true);
  sys.dbms_lob.converttoblob(l_raw, l_data, sys.dbms_lob.lobmaxsize, l_off1, l_off2, sys.dbms_lob.default_csid, l_lang_ctx, l_warn);

  l_len := sys.dbms_lob.getlength(l_raw);
  l_amount := 648;
  l_pos := 1;

  sys.dbms_lob.createtemporary(l_rawout, true);

  while (l_pos < l_len) loop
    -- read the next chunk of encoded data; l_amount is in/out and returns
    -- the bytes actually read
    sys.dbms_lob.read(l_raw, l_amount, l_pos, l_bufr);
    l_bufr := sys.utl_encode.base64_decode(l_bufr);

    sys.dbms_lob.writeappend(l_rawout, sys.utl_raw.length(l_bufr), l_bufr);
    l_pos := l_pos + l_amount;
  end loop;

  sys.dbms_lob.createtemporary(l_out, true);
  l_off1 := 1;
  l_off2 := 1;
  sys.dbms_lob.converttoclob(l_out, l_rawout, sys.dbms_lob.lobmaxsize, l_off1, l_off2, sys.dbms_lob.default_csid, l_lang_ctx, l_warn);

  return l_out;
end;


Also a few notes:

  • We strip off the CRLF once more before decoding. That’s just a fail-safe, in case we’re passed data directly from Oracle’s utl_encode.base64_encode instead of our own to_base64 function.
  • We again process chunks of data in multiples of both 6 and 8 to avoid any complications and issues with chopping BASE64 bit groups somewhere in the middle.
  • You may gain some performance benefits by using larger values, but this performs sensibly well and has been tested with fairly large input strings.
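The same constraint applies on the decode side; here’s a quick Python sketch (illustrative, assuming the input is already free of CRLFs, as our to_base64 guarantees):

```python
import base64

encoded = base64.b64encode(b"x" * 1000).decode()   # no line breaks

# Every 4-character BASE64 group decodes to exactly 3 bytes, so any chunk
# size that is a multiple of 4 keeps the groups intact.
chunk = 648   # a multiple of both 6 and 8 (and hence of 4), as in from_base64
decoded = b"".join(
    base64.b64decode(encoded[i:i + chunk]) for i in range(0, len(encoded), chunk)
)
print(decoded == b"x" * 1000)   # True
```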

It’s also worth noting that this should work with pretty much any input string – even with Thai characters:

SQL> select zz$util.from_base64(zz$util.to_base64('สวัสดีครับ')) from dual;



zztat’s implementation of BASE32

As we were working on integrating two-factor authentication via Google’s Authenticator app for our upcoming zztat mobile interface, we were faced with the requirement to be able to encode and decode data into BASE32.

Most of you will be aware that BASE64 is natively supported by Oracle, via the utl_encode package:

SQL> select utl_encode.base64_encode(utl_raw.cast_to_raw('zztat rocks!'))
    from dual;

656E703059585167636D396A61334D68

And the reverse is also simply done via:

SQL> select utl_raw.cast_to_varchar2(utl_encode.base64_decode(hextoraw('656E703059585167636D396A61334D68'))) as plain
    from dual;

zztat rocks!

But unfortunately, BASE32 is not as common, and things get a bit more involved.

The original definition of these encoders is in Request For Comment (RFC) 4648: “The Base16, Base32, and Base64 Data Encodings”.

A quick Google search to see whether any of the publicly available libraries already include BASE32 yielded nothing. Similarly, what can be found on forums and elsewhere online isn’t really production-ready code. Some implementations entirely omit the padding that’s specified in the RFC; others can only handle fixed-length strings. That wasn’t good enough, so we decided to write our own and share it with you all.

Below is zztat’s implementation of BASE32. This is a part of the Google Authenticator support code and was found to work perfectly fine with it.

Before we can dive into actually converting anything to BASE32, we first need to be able to convert into binary. And I don’t mean just Oracle’s RAW data type, but actual binary as a string representation consisting of 1s and 0s.

The following helper function does just that:

function num_to_bin (decimal in number, bits in number default 8) return varchar2 deterministic
is
  l_decimal number := decimal;
  l_binary varchar2(64);
begin
  while (l_decimal > 0) loop
    l_binary := mod(l_decimal, 2) || l_binary;
    l_decimal := trunc(l_decimal / 2);
  end loop;

  -- left-pad with zeros so the length is a multiple of "bits"
  return lpad('0', nullif(bits - mod(length(nvl(l_binary, 0)), bits), bits), '0') || nvl(l_binary, 0);
end;


There’s a key aspect of this function that we need to take a closer look at: the number of bits can vary.

The reason for this is that we need to be able to correctly convert regular bytes (consisting of 8 bits) to a binary representation as well as 5-bit values, which are used in the BASE32 algorithm.

What that function does is easily demonstrated:

select zz$util.num_to_bin(4) as binary from dual;

00000100

select zz$util.num_to_bin(257) as binary from dual;

0000000100000001

And as it will be used when dealing with BASE32:

select zz$util.num_to_bin(24, 5) as binary_5 from dual;

11000

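If you want to experiment with the logic outside the database, here’s a rough Python equivalent of num_to_bin (a sketch for illustration, not part of zztat):

```python
def num_to_bin(n: int, bits: int = 8) -> str:
    """Binary string representation of n, left-padded with zeros
    so that its length is a multiple of `bits`."""
    binary = ""
    while n > 0:
        binary = str(n % 2) + binary
        n //= 2
    binary = binary or "0"
    return "0" * ((-len(binary)) % bits) + binary

print(num_to_bin(4))       # 00000100
print(num_to_bin(257))     # 0000000100000001
print(num_to_bin(24, 5))   # 11000
```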
Why is this important?

The BASE32 encoding works using the following alphabet (from RFC 4648):

 0 A    9 J   18 S   27 3
 1 B   10 K   19 T   28 4
 2 C   11 L   20 U   29 5
 3 D   12 M   21 V   30 6
 4 E   13 N   22 W   31 7
 5 F   14 O   23 X
 6 G   15 P   24 Y   (pad) =
 7 H   16 Q   25 Z
 8 I   17 R   26 2

The encoding table consists of a total of 32 characters, numbered from 0 to 31. We can represent that using just 5 bits:

00001 = 1
00010 = 2
00100 = 4
01000 = 8
10000 = 16

Thus we can represent the range from 0 (00000) to 31 (11111) using those 5 bits. This is an essential part of how BASE32 works.

The first part is the encoder:

function to_base32 (string in varchar2) return varchar2 deterministic
is
  l_string varchar2(100) := string;
  l_b32_map char(32)     := 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';
  l_char_ascii int;
  l_char_bin char(8);
  l_string_bin varchar2(4000);
  l_b32_string varchar2(4000);
  l_padding varchar2(10);
begin
  -- step 1: convert each input byte to its 8-bit binary representation
  for c in 1..length(l_string) loop
    l_char_ascii := ascii(substr(l_string, c, 1));
    l_char_bin   := num_to_bin(l_char_ascii);
    l_string_bin := l_string_bin || l_char_bin;
  end loop;

  -- step 2: split the bit string into 40-bit groups; the length of the
  -- last group determines the padding
  for b in (
    select rownum rn1,
           rpad(bits, ceil(len/5)*5, '0') as bits,
           decode(len, 8, '======', 16, '====', 24, '===', 32, '=') as padding
      from (
        select regexp_substr(l_string_bin, '[01]{1,40}', 1, level) as bits,
               length(regexp_substr(l_string_bin, '[01]{1,40}', 1, level)) as len
          from dual
         connect by level <= regexp_count(l_string_bin, '[01]{1,40}') + 1
      )
     where bits is not null
  ) loop
    l_padding := b.padding;
    -- step 3: map each 5-bit value to its character in the BASE32 alphabet
    for b5 in (
      select bits,
             bin_to_num(substr(bits, 1, 1), nvl(substr(bits, 2, 1), 0), nvl(substr(bits, 3, 1), 0), nvl(substr(bits, 4, 1), 0), nvl(substr(bits, 5, 1), 0)) as chr
        from (
          select regexp_substr(b.bits, '[01]{1,5}', 1, level) as bits
            from dual
           connect by level <= regexp_count(b.bits, '[01]{1,5}') + 1
        )
       where bits is not null
    ) loop
      l_b32_string := l_b32_string || substr(l_b32_map, b5.chr + 1, 1);
    end loop;
  end loop;

  return l_b32_string || l_padding;
end;

And here’s the reverse:

function from_base32 (string in varchar2) return varchar2 deterministic
is
  l_string varchar2(100) := string;
  l_b32_map char(32)     := 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';
  l_char_bin char(5);
  l_map_pos number;
  l_string_bin varchar2(4000);
  l_plain_string varchar2(4000);
  l_padding varchar2(10);
  l_pad_length number;
  l_trim number;
begin
  if (instr(l_string, '=') > 0) then
    l_padding    := substr(l_string, instr(l_string, '='));
    l_pad_length := length(l_padding);
    l_string     := substr(l_string, 1, instr(l_string, '=') - 1);
  end if;

  -- convert each BASE32 character back to its 5-bit binary representation
  for c in 1..length(l_string) loop
    l_map_pos    := instr(l_b32_map, substr(l_string, c, 1)) - 1;
    l_char_bin   := num_to_bin(l_map_pos, 5);
    l_string_bin := l_string_bin || l_char_bin;
  end loop;

  -- drop the zero bits that were added as filler during encoding
  l_trim := mod(length(l_string_bin), 8);
  l_string_bin := substr(l_string_bin, 1, length(l_string_bin) - l_trim);

  -- map each 8-bit group back to its character
  for b8 in (
    select bits,
           bin_to_num(substr(bits, 1, 1), nvl(substr(bits, 2, 1), 0), nvl(substr(bits, 3, 1), 0), nvl(substr(bits, 4, 1), 0),
                      nvl(substr(bits, 5, 1), 0), nvl(substr(bits, 6, 1), 0), nvl(substr(bits, 7, 1), 0), nvl(substr(bits, 8, 1), 0)) as chr
      from (
        select regexp_substr(l_string_bin, '[01]{1,8}', 1, level) as bits
          from dual
         connect by level <= regexp_count(l_string_bin, '[01]{1,8}') + 1
      )
     where bits is not null
  ) loop
    l_plain_string := l_plain_string || chr(b8.chr);
  end loop;

  return l_plain_string;
end;

First and foremost, this is not an implementation to use if you want to encode gigabytes worth of data as BASE32. It won’t perform. But for a simple use such as authentication and token generation (where the function is called exactly once throughout the user’s session) it’s perfectly fine.

For larger amounts of data, you would likely want to either replace the regexp_substr() queries that split the string with pure PL/SQL to avoid the context switches – or, if you’re particularly adventurous, implement the entire thing in plain SQL.

A couple explanations on the code:

  • The first thing you need to do is split the binary string into groups of 40 bits, both when encoding and decoding.
  • Once you know the length of the remainder (which can be 8, 16, 24 or 32 bits, or none), you know the padding you need to deal with.
  • Oracle’s bin_to_num function was new to me. But it suits our needs perfectly here as it takes a bit mask as input and converts it to a decimal number.
  • You can see that we’re feeding bin_to_num 5 bits when encoding to base32 (since we only need 5 bits to represent values 0-31) but 8 bits when decoding back to decimal.

And that’s it! BASE32 in PL/SQL.




Fewer Emails!

By default, the zztat installation relies on the ability to send out emails. This means that each target database managed by the framework has been sending out emails on its own.

This led to the additional effort of having to configure the database servers to send email, which is both labor intensive and may be a security concern in some environments.

Introducing a new reaction: SEND_EMAIL_REPO. This reaction makes use of the framework’s core strengths and allows the repository to send out emails on behalf of the target databases.

How does it work?

Well, it couldn’t be simpler. Any gauge that currently uses the SEND_EMAIL reaction can simply be switched to the new SEND_EMAIL_REPO reaction. And that’s that.

Email send requests are queued up locally on the target databases and are transmitted to the repository, where the emails are then processed and sent out. And it all happens within seconds. The delay introduced by going through the repository versus the target sending it on its own is minimal.

This new reaction will greatly simplify the effort to deploy zztat in larger environments.

With January approaching fast, so is zztat’s production release. We are going to launch no later than January 15th, and will be launching with a global scope right off the bat. Partnership contracts are signed, distribution chains are being established, and the infrastructure needed is being built as I am writing this.

Are you interested in participating? Ping me at stefan (at) and I can put you in touch with the right folks to get you on board.

Happy New Year to you all and may you rest well during your nights in 2018; thanks to zztat’s proactive power!



Patch me up, Scotty!

Patches are a necessary evil whenever you’re developing software. Since zztat is shipped as a 100% SQL & PL/SQL software, we have the luxury of providing far better usability to patch zztat compared to some other software.

The zztat framework now comes with a full patching mechanism. And it’s pretty powerful!

Let’s assume that you have a zztat repository, and 20-some target databases which are monitored and reporting in to that repository. What if you need to apply a zztat patch?

Well, you’ll be glad to know that all you have to do is load the patch into the repository. The framework then distributes the patch to all the target databases and applies it there automatically!

The mechanisms used for patching make full use of the core framework components, which enables us to:

  • Deliver patch sets (a.k.a. bundle patches)
  • Deliver one-off patches or hot-fixes

… and allow you to apply them with a single command. On the entire environment.

The patching component is smart enough to understand:

  • Online vs Offline Patches

Some patches will require the framework to be stopped to apply the patch, and restarted thereafter. This will happen automatically.

  • Patch Dependencies

Some patches may require other patches to install correctly. An example would be a hot-fix produced for a patch-set. We want to take as much of the simple tasks off your hands, so if we need to apply a prerequisite patch and it is available, we’ll apply it automatically. The same goes for patches required to be rolled back before a new patch can be applied.

  • Obsolete Patches

Patches may render previous patches obsolete. This can happen when a new patch is released which combines two or more previous patches, or when a patch-set includes hot-fixes previously released. The patching component can handle that, too.

  • Automated vs Manual Patching

If, for some reason or another you wish to apply the patch manually on a few select databases instead of all of them, there is also a manual mode which gives you full control over the patching process.

The repository of course contains all the information about which patch is ready to apply, applied or rolled back on which target database. And, naturally you will also be controlling all of these things from one single place: the repository database.

So how does it all work?

Say you have a patch that you need to apply to your zztat environment. You’d have to do the following:

  1. Download the patch onto the zztat repository server and extract the archive
  2. Start the patch install script

What the framework then will do is the following:

  1. Locate the patch inventory XML file, which contains all the details the patching process needs (as well as the rollback steps).
  2. Load the patch data files into the zztat repository database.
  3. Patch the repository database.
  4. Create tasks for the target databases to pick up, telling them to download the patch from the repository.
  5. Once a target database has completed the download, it will inform the repository.
  6. The repository will create a new task for the patch to be applied.
  7. The target database will then apply the patch and once complete, inform the repository.
  8. Once all targets have checked in to the repository, the patch is marked as fully applied.
  9. If a database later comes out of blackout or is restarted and zztat finds that an automatic patch install has happened in the meantime, it will pick the patch up and apply it as well.

This is yet another awesome feature that lets zztat stand out against existing monitoring software. And there will be more to come!

In closing, we’d like to hear from you guys what you’d think:

Should zztat automatically download (but not apply) available patches from zztat’s servers? Would you want that feature?

Let us know your opinion in the comments!

Have a great day!




BETA Progress Update

Hi everyone!

The zztat beta is going strong, with lots of bug fixes and feature enhancements going in daily.

The third beta release will be the metric release and is expected to go out at the end of November or, at the latest, in the first few days of December.

Since the initial beta1, much has been enhanced and added. A short highlight reel is here:

  • All metadata is now refreshed automatically on all target databases whenever a change is done on the repository. This makes zztat fully centralized.
  • Metric data can now be automatically purged, with a configurable retention.
  • Copying gauges to create database-specific checks has been overhauled and is now more intuitive.
  • Metrics fired as a reaction (such as high-speed sampling) now automatically update the alert to indicate the snapshot data. This enables various reports to easily access the high-speed sampling data.
  • Oracle options can now be monitored by zztat to catch potential license issues with Oracle. Tables, Indexes, Lobs, Flashback Archives, and even RMAN configurations can now be checked.
  • Greatly enhanced memory usage monitoring that goes as deep as showing you which Oracle kernel function has allocated the memory. Comes with non-intrusive but less detailed variations as well as fully-detailed variations which probe the process in question. I’ll be posting more details about this in the near future!
  • dbms_system has been eliminated and its functionality is now integrated in zztat’s own sys_helper package.

There is one more feature that we’d like to highlight specifically, because it requires a bit more of an elaborate explanation: Automatic Error Reports.

First of all, the feature is disabled by default and must be explicitly enabled in your environment. Once enabled, whenever an error is seen, zztat will send an email with the error details to the developers.

It will look something like this (click to enlarge):

The email has been specifically designed to:

  • Not include any personally identifiable information whatsoever
  • Not include any IP addresses, host names, etc
  • Give us a clear presentation of what happened that led to the error
  • Give us the full zztat error stack to see exactly what happened
  • Enable us to proactively correct issues found
  • Always send you an exact copy of the email we’ve received. It will be sent to the email address defined for “CRITICAL” alerts.

Our privacy policy has also been updated and can as always be found at

We’re still well on track for the production release come January and look forward to seeing zztat make your DBA lives so much easier!

Have a great week!





zztat: Beta Release Announcement

Hi all!

We’re excited to announce the availability of the zztat Beta-1 for every backer, before the end of this week!

What Will The First Beta Include?

  • 5 Default Metrics
  • 8 Internal Metrics
  • Non-intrusive generic Reactions
  • 3 Advanced Reactions, which are disabled by default
  • The zztat UI

What Do I Need To Get Started?

  • An Oracle database for the repository. This can be a small new instance, or a dedicated schema in an existing instance (2GB SGA + 10GB disk is plenty).
  • The repository is ideally with XMLDB and APEX 5.1 installed (but and 12.2 are supported as well).
  • A few GB of space for a tablespace on the repository and on each target
  • At least one target database to be monitored by zztat. Your beta trial license has no limit on the number of databases you can monitor. Supported versions include,, and

We will send you the software package pre-configured, with the setup configuration file already prepared. It will be configured to install with the following options:

  1. XML DB is assumed to be present, so sending emails will be enabled.
  2. The ZZ$SYS_HELPER package will be installed in the SYS user. This is the only object created outside of the zztat schemas (apart from the global application contexts which are also stored in SYS automatically regardless of who creates them).
  3. The installer will create two users on the repository: ZZ$REPO (repository schema owner) and ZZ$LINK (database link user where the targets connect to).
  4. The installer will create two users on the target databases: ZZ$USER (zztat monitoring user) and ZZ$LINK (private database link owner who connects to the repository).
  5. All default passwords will be set to “Change.123Me”.

You are of course free to change those settings in setup.sql before running the installer. And yes, you can even choose different names for the users, usernames are not hard-coded anywhere.

More Details

The following default metrics will be enabled out of the box in this beta release:

  • ASM Diskgroup monitoring (every 5 minutes)
  • Tablespace monitoring (every 5 minutes)
  • ASM Disk monitoring (every 5 minutes for offline / unavailable disks)
  • Session wait monitoring (every 5 minutes)
  • Top SQL statements (every 5 minutes)

Each of those metrics comes with default gauges, which are also enabled by default. They will all default to non-intrusive reactions, such as sending emails or writing to logs. You can view and change the thresholds for those gauges in the zztat UI.

Internal metrics which zztat uses and are enabled by default:

  • Extents (collected every 4 hours)
  • Audit actions (collected once)
  • Event names (collected once)
  • Latch names (collected once)
  • Metric names (collected once)
  • SQL commands (collected once)
  • Stat names (collected once)
  • Wait classes (collected once)

Reactions supplied with this release include:

  • Sending emails (requires XMLDB)
  • Writing to the database alert log
  • Adding datafiles automatically (disabled by default but can be enabled easily)
  • Hi-speed latch sampling (disabled by default but can be enabled easily)
  • Hi-speed mutex sampling (disabled by default but can be enabled easily)

The zztat graphical user interface, with the following functionality:

  • A draft overview Dashboard showing environment health and activity
  • Managing metric queries and schedules
  • Managing gauge queries and schedules
  • Managing gauge filter columns, adding new filter columns
  • Managing gauge ignore values
  • Overriding gauge filter columns
  • Creating and editing reaction chains
  • Managing reaction throttling
  • Configuration screen for many framework parameters
  • Built-in help and tips for every function

Thank you once again for all your support!

zztat UI: New Updates!

The UI is coming along well and includes lots of new functionality. It looks like we will be able to include it, with basic functionality, in the first beta release – well ahead of schedule – so that you can all check it out live in action!

The UI is designed with usability in mind and includes loads of tool-tips and help texts to guide the user through the application. Every form field has a help text, and every form has explanations added to it.

Here’s the database configuration screen, which controls the core behavior of the framework:

As with any other zztat entity, configurations follow the same model: there is a “Default” configuration, and you have the option of overriding the default for a specific target database, as seen in the screenshot for the database O12102.




Help texts such as this one can be opened by clicking on the little question mark icon behind the form fields:

And here’s the notification settings screen of the database configuration:

The metric editor has also been added and initially will allow you to customize the frequency at which the metrics are executed, and change the metric query. For the final release, the editor will be further enhanced to allow even greater customization.

Those new modal dialogues added in APEX 5.1 really make the application flow feel a lot more natural. We’re making heavy use of them in the zztat UI. Here’s another quick screenie showing off the new gauge editor:

And finally, what most users will be needing is the gauge column editor, which allows you to customize the thresholds that will cause the alerts to trigger:

And all this is of course configurable at one central place – the zztat UI – and will be automatically applied wherever it is needed within the entire zztat environment!

Want to change the tablespace full threshold for a specific database only? No need to log on to that database server and fiddle with a configuration file.

Want to temporarily exclude a tablespace from monitoring and alerting? No need to log on, either. Just do it right there in the UI!

Oh, and one more thing needs to be pointed out. If there are any issues with the form data you entered, zztat will raise descriptive error messages telling you what the problem is:

Usability is the first priority. Naturally that also includes having user-friendly error messages, and not some cryptic ORA-00001: unique constraint (SYS_C000241) violated.

Stay tuned for more updates to come!




zztat: The UI is coming! And an announcement too!

Hi all

It’s been a busy, busy week. Many bugs have been squashed, troubles have been shot, and many a line of code has been written. The framework now sports just under 25’000 lines of code, by the way, with the largest chunk being the internal job & processes package, which comes in at just under 4500 lines.

The big news is that we have decided to make some changes to the planned licensing and as a result the zztat UI will be included with both the basic and the premium packages of zztat. So you’ll always get the GUI, regardless of the package you purchase.

Development of the UI started on Friday, and it will be based on the latest version of Oracle’s Application Express (APEX) version 5.

Here’s a little sneak peek at what’s in store:

And the first draft of the metric screen:

Stay tuned for more to come!