
troubleshooting “enq: DX - contention” and “inactive transaction branch” wait events

I recently ran into an interesting problem that I thought I would share. A customer database was experiencing extremely high waits on “enq: DX - contention” and “inactive transaction branch”.

This script will list the top 9 non-idle wait events over the past hour:

-- Top non-idle wait events (total wait time in seconds)
-- over the most recent hour of ASH samples
select * from (
SELECT
    h.event "Wait Event",
    SUM(h.wait_time + h.time_waited)/1000000 "Total Wait Time"
FROM
    v$active_session_history h,
    v$event_name e
WHERE
    h.sample_time < (select max(sample_time) from v$active_session_history)
    and h.sample_time > (select max(sample_time) - 1/24 from v$active_session_history)
    AND h.event_id = e.event_id
    AND e.wait_class <> 'Idle'
GROUP BY h.event
ORDER BY 2 DESC)
where rownum < 10;

 

Wait Event                    Total Wait Time
enq: DX - contention               3015.42557
inactive transaction branch         494.969554
SQL*Net more data from dblink         9.124002
log file parallel write               6.774227
db file sequential read               5.770977
db file scattered read                5.346139
SQL*Net more data to client           2.406811
log buffer space                      2.109626
control file parallel write           1.489309

As you can see, these two events account for orders of magnitude more wait time than any of the others. I ran this query a few times in a row, and each time the “Total Wait Time” value increased for the top two events, which meant that at least one process was still waiting on them. My first thought was to check which SQL statements were responsible for such huge waits, so I ran this script, which looks over the past hour for the SQL statements that experienced these two wait events.

-- Wait time per SQL statement for the two problem events
-- over the most recent hour of ASH samples
SELECT
    a.sql_text,
    SUM(h.wait_time + h.time_waited)/1000000 "Total Wait Time"
FROM
    v$active_session_history h,
    v$event_name e,
    v$sqlarea a
WHERE
    h.sample_time < (select max(sample_time) from v$active_session_history)
    and h.sample_time > (select max(sample_time) - 1/24 from v$active_session_history)
    AND h.event_id = e.event_id
    and h.sql_id = a.sql_id
    AND e.wait_class <> 'Idle'
    and h.event in ('enq: DX - contention', 'inactive transaction branch')
group by a.sql_text
ORDER BY 2 asc;

Surprisingly, this yielded 0 rows! This was confusing, so I looked at the waits by session rather than by SQL:

-- Wait time per session for the two problem events
-- over the most recent hour of ASH samples
SELECT
    h.session_id,
    SUM(h.wait_time + h.time_waited)/1000000 "Total Wait Time"
FROM
    v$active_session_history h,
    v$event_name e
WHERE
    h.sample_time < (select max(sample_time) from v$active_session_history)
    and h.sample_time > (select max(sample_time) - 1/24 from v$active_session_history)
    AND h.event_id = e.event_id
    AND e.wait_class <> 'Idle'
    and h.event in ('enq: DX - contention', 'inactive transaction branch')
group by h.session_id
ORDER BY 2 asc;

 

SESSION_ID   Total Wait Time
       184           1169.34
       199           1160.33
       174           1178.33

This returned 3 rows. Another query showed some more details:

select prev_sql_id, sql_id, blocking_session
from v$session
where sid in (184, 199, 174);

 

PREV_SQL_ID  SQL_ID       BLOCKING_SESSION
                                       348
                                       403
                                       234

This showed that both prev_sql_id and sql_id were blank (no wonder the join to v$sqlarea returned nothing) and that all three sessions were being blocked by other sessions. A little more investigation revealed that the blocking sessions were long-running data extracts which were themselves waiting on “SQL*Net more data to client”. At this point, I decided to check Oracle Support for any bugs we might be hitting, and sure enough, I found one that matched what I was seeing pretty closely. Bug 8552942, which causes processes to spin indefinitely waiting on “enq: DX - contention” and “inactive transaction branch”, is fixed in version 11.2. Unfortunately, this database is version 10.2.0.4, so there is no fix currently available.
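
As an aside, the “little more investigation” above was mostly a matter of looking at the blockers themselves in v$session. A minimal sketch of that kind of check, using the blocking_session values (348, 403 and 234) returned above, might look something like this; the exact columns you pull are a matter of taste:

-- Rough sketch: see what the blocking sessions identified above
-- are currently running and waiting on
select sid, status, event, sql_id, program
from v$session
where sid in (348, 403, 234);

The event column for those blockers is what pointed at the long-running extracts waiting on “SQL*Net more data to client”.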

2 comments to troubleshooting “enq: DX - contention” and “inactive transaction branch” wait events

  • Thanks a lot for the article. I was facing the same issue and the article helped me in analyzing it.
    I wanted to add that the patch for this bug on 10.2.0.4 is available for HP-UX, and a request can be made for other platforms. Patch 8552942 is available on 10.2.0.5 and 11.1.0.7.

  • Andrew

    Hey Mike,

    Ver: 11.2
    OS: Sol 10

    Thanks for the post. Amazingly, I saw the same issue on one of our databases, which exhibits the same sort of behaviour, not in exactly the same way but similar. Tracing one of the sessions waiting on these two events reveals “sweet nothing”: the trace literally shows the session running nothing, just waiting on CPU and those two waits. Very interesting indeed.

    Distributed transactions aren’t possible on this particular database, so this “enq: DX” must be coming from somewhere that isn’t a distributed transaction, or a similar bug is in play. “Cursor: mutex S” is a big wait as well, and the two seem to somehow be inter-related.

    Very frustrating.
    Anyway, thanks again.

    I’ll keep you posted …
