Subject: Re: OOM and high SSTables count
From: "J. Ryan Earl"
To: user@cassandra.apache.org
Cc: Jan
Date: Thu, 5 Mar 2015 00:11:06 -0600

We think it is this bug: https://issues.apache.org/jira/browse/CASSANDRA-8860

We're rolling a patch to beta before rolling it into production.

On Wed, Mar 4, 2015 at 4:12 PM, graham sanderson wrote:

> We can confirm a problem on 2.1.3 (sadly our beta sstable state obviously
> did not match our production ones in some critical way).
>
> We have about 20k sstables on each of 6 nodes right now; a quick glance
> shows 15k of those are from OpsCenter, which may have something to do
> with the beta/production mismatch.
>
> I will look into the open OOM JIRA issue against 2.1.3 - we may be
> penalized for heavy use of JBOD (x7 per node).
>
> It also looks like 2.1.3 is leaking memory, though it eventually recovers
> via GCInspector causing a complete memtable flush.
>
> On Mar 4, 2015, at 12:31 PM, daemeon reiydelle wrote:
>
>> Are you finding a correlation between the shards on the OOM DC1 nodes
>> and the OOM DC2 nodes? Does your monitoring tool indicate that the DC1
>> nodes are using significantly more CPU (and memory) than the nodes that
>> are NOT failing? I am leading you down the path to suspecting that your
>> sharding is giving you hot spots. Also, are you using vnodes?
>> Patrick
>>
>> On Wed, Mar 4, 2015 at 9:26 AM, Jan wrote:
>>
>>> Hi Roni,
>>>
>>> You mentioned:
>>> DC1 servers have 32GB of RAM and 10GB of HEAP. DC2 machines have 16GB
>>> of RAM and 5GB HEAP.
>>>
>>> Best practices would be to:
>>> a) have a consistent type of node across both DCs (CPUs, memory,
>>>    heap & disk)
>>> b) increase the heap on DC2 servers to 8GB for the C* heap
>>>
>>> The leveled compaction issue is not addressed by this.
>>> Hope this helps.
>>>
>>> Jan
>>>
>>> On Wednesday, March 4, 2015 8:41 AM, Roni Balthazar <
>>> ronibalthazar@gmail.com> wrote:
>>>
>>> Hi there,
>>>
>>> We are running a C* 2.1.3 cluster with 2 datacenters: DC1: 30 servers /
>>> DC2: 10 servers.
>>> DC1 servers have 32GB of RAM and 10GB of HEAP. DC2 machines have 16GB
>>> of RAM and 5GB HEAP.
>>> DC1 nodes have about 1.4TB of data and DC2 nodes 2.3TB.
>>> DC2 is used only for backup purposes. There are no reads on DC2.
>>> All writes and reads are on DC1 using LOCAL_ONE, and the RF is DC1: 2
>>> and DC2: 1.
>>> All keyspaces use STCS (average 20~30 SSTables per table on both DCs)
>>> except one that uses LCS (DC1: avg 4K~7K SSTables / DC2: avg 3K~14K
>>> SSTables).
>>>
>>> Basically we are running into 2 problems:
>>>
>>> 1) High SSTable count on the keyspace using LCS (this KS has
>>> 500GB~600GB of data on each DC1 node).
>>> 2) There are 2 servers on DC1 and 4 servers in DC2 that went down with
>>> the OOM error message below:
>>>
>>> ERROR [SharedPool-Worker-111] 2015-03-04 05:03:26,394
>>> JVMStabilityInspector.java:94 - JVM state determined to be unstable.
>>> Exiting forcefully due to:
>>> java.lang.OutOfMemoryError: Java heap space
>>>         at org.apache.cassandra.db.composites.CompoundSparseCellNameType.copyAndMakeWith(CompoundSparseCellNameType.java:186) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.composites.AbstractCompoundCellNameType$CompositeDeserializer.readNext(AbstractCompoundCellNameType.java:286) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.AtomDeserializer.readNext(AtomDeserializer.java:104) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:426) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.fetchMoreData(IndexedSliceReader.java:350) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:142) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:44) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
>>>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
>>>         at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:172) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:155) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:146) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:125) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:99) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
>>>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
>>>         at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:203) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:107) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:81) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:69) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:320) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1486) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2171) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_31]
>>>         at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>         at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) ~[apache-cassandra-2.1.3.jar:2.1.3]
>>>
>>> So I am asking: how do we debug this issue, and what are the best
>>> practices in this situation?
>>>
>>> Regards,
>>>
>>> Roni
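[Editorial note] To see which tables are contributing to an SSTable explosion like the one described above, `nodetool cfstats` reports an "SSTable count" per table; a rough cross-check is to count `*-Data.db` files on disk. Below is a minimal sketch, assuming the `data/<keyspace>/<table>/` directory layout that Cassandra 2.1 uses by default; the data path is an assumption and should be taken from `data_file_directories` in your cassandra.yaml.

```shell
# count_sstables: print per-table live SSTable counts, highest first.
# Assumes the data/<keyspace>/<table>/ layout of Cassandra 2.1 defaults;
# pass your actual data directory (an assumption -- check cassandra.yaml).
count_sstables() {
    data_dir="$1"
    for table_dir in "$data_dir"/*/*/; do
        # Each live SSTable has exactly one -Data.db component.
        count=$(find "$table_dir" -maxdepth 1 -name '*-Data.db' | wc -l)
        printf '%d %s\n' "$count" "$table_dir"
    done | sort -rn
}

# Example invocation (path is illustrative):
# count_sstables /var/lib/cassandra/data
```

Counts far above what `nodetool cfstats` reports can also indicate leftover files from interrupted compactions, which is worth ruling out separately.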
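[Editorial note] Jan's suggestion to raise the DC2 heap to 8GB maps to two settings in conf/cassandra-env.sh. A minimal sketch follows; the values are illustrative, reflecting the common guidance of the era (heap roughly 1/4 of RAM capped near 8GB, new generation around 100MB per CPU core), not a measured recommendation for this cluster.

```shell
# conf/cassandra-env.sh overrides -- illustrative values only, tune for
# your hardware. Setting MAX_HEAP_SIZE disables the script's automatic
# heap calculation; if you set it, set HEAP_NEWSIZE as well, since
# cassandra-env.sh rejects a half-override.
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"   # ~100MB per core is the usual starting point
```

Note this addresses only the OOM side of the report; the LCS SSTable count problem needs separate investigation, as Jan says above.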