Subject: Re: Error loading data
From: Mike Carey
To: users@asterixdb.incubator.apache.org
Date: Fri, 29 Apr 2016 06:40:05 -0700
That's bizarre ... Not at all what others see for load behavior. I wonder
what could be going wrong here....?

On Apr 29, 2016 6:27 AM, "Magnus Kongshem" <kongshem@online.ntnu.no> wrote:
> Well, it works, but it takes forever. Initializing my collection with
> 20 GB of data takes about 1.5 hours. Adding 5 GB of new data to the
> collection "never" completes. It had been running for 48 hours and had
> only managed to insert 1/3 of the 5 GB. I had to abort it to see what
> actually got inserted.
>
> BG,
>
> Magnus
>
> On Tue, Apr 19, 2016 at 8:01 PM, Magnus Kongshem <kongshem@online.ntnu.no> wrote:
>>
>> Ah, I see, I should have realized that myself. I will test it first
>> thing in the morning.
>>
>> --
>> Best regards,
>> Magnus Kongshem
>> 41565906
>>
>> On 19 Apr 2016 at 19:11, "Ildar Absalyamov" <ildar.absalyamov@gmail.com> wrote:
>>>
>>> Magnus,
>>>
>>> Since you are using an autogenerated key, the inserted record should
>>> not contain that field. To achieve that, you need to change the
>>> record format in the return clause:
>>>
>>> insert into dataset posdata(
>>>   for $x in dataset posdata_temp return {
>>>     "campus": $x.campus,
>>>     "building": $x.building,
>>>     "floor": $x.floor,
>>>     "timestamp": $x.timestamp,
>>>     "dayOfWeek": $x.day,
>>>     "hourOfDay": $x.hour,
>>>     "latitude": $x.latitude,
>>>     "salt_timestamp": $x.salt,
>>>     "longitude": $x.longitude,
>>>     "id": $x.id,
>>>     "accuracy": $x.accuracy
>>>   }
>>> )
>>>
>>>> On Apr 19, 2016, at 04:54, Magnus Kongshem <kongshem@stud.ntnu.no> wrote:
>>>>
>>>> Your suggestion does not work because you get duplicate fields.
>>>>
>>>> Exception: Duplicate field "uuid" encountered [AlgebricksException]
>>>>
>>>> Any other suggestions? This is a major issue in my view, and as Mike
>>>> Carey said: it should be easy and seamless to add more data to the
>>>> dataset.
>>>> BG,
>>>> Magnus
>>>>
>>>> On Thu, Apr 14, 2016 at 6:34 PM, Ildar Absalyamov <ildar.absalyamov@gmail.com> wrote:
>>>>>
>>>>> Magnus,
>>>>>
>>>>> You can still add data to a non-empty dataset via inserts.
>>>>> The easiest way to do that, provided you have the data you want to
>>>>> insert in files, is to bulk load the data into a new temporary
>>>>> dataset and then insert it into the desired dataset:
>>>>>
>>>>> create dataset posdata_temp(table) primary key uid autogenerated;
>>>>> load dataset posdata_temp using localfs (("path"="localhost:///data/path/to/file/file.adm,localhost:///data/path/to/file/file2.adm,localhost:///data/path/to/file/file3.adm"),("format"="adm"));
>>>>> insert into dataset posdata(
>>>>>   for $x in dataset posdata_temp return $x
>>>>> )
>>>>>
>>>>>> On Apr 14, 2016, at 07:41, Magnus Kongshem <kongshem@stud.ntnu.no> wrote:
>>>>>>
>>>>>> Does this mean that adding additional data to an instance and
>>>>>> dataverse is not supported?
>>>>>>
>>>>>> Magnus
>>>>>>
>>>>>> On Wed, Mar 30, 2016 at 8:11 PM, Ian Maxon <imaxon@uci.edu> wrote:
>>>>>>>
>>>>>>> It should just be a quoted string with commas inside separating
>>>>>>> the URL-ish paths, like so:
>>>>>>>
>>>>>>> load dataset foo using localfs (("path"="localhost:///data/path/to/file/file.adm,localhost:///data/path/to/file/file2.adm,localhost:///data/path/to/file/file3.adm"),("format"="adm"));
>>>>>>>
>>>>>>> On Wed, Mar 30, 2016 at 6:24 AM, Magnus Kongshem <kongshem@stud.ntnu.no> wrote:
>>>>>>>>
>>>>>>>> Yes I am.
>>>>>>>>
>>>>>>>> So, will combining the files and doing the command once solve
>>>>>>>> it, or do I have to input the AQL for each file like below?
>>>>>>>>
>>>>>>>> use dataverse bigd;
>>>>>>>> load dataset posdata using localfs
>>>>>>>>     (("path"="localhost:///data/path/to/file/file.adm"),("format"="adm"));
>>>>>>>> (("path"="localhost:///data/path/to/file/file2.adm"),("format"="adm"));
>>>>>>>> (("path"="localhost:///data/path/to/file/file3.adm"),("format"="adm"));
>>>>>>>>
>>>>>>>> BG,
>>>>>>>> Magnus
>>>>>>>>
>>>>>>>> On Wed, Mar 30, 2016 at 3:21 PM, Wail Alkowaileet <wael.y.k@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> Are you trying to load each file separately?
>>>>>>>>> That, AFAIK, is not supported.
>>>>>>>>>
>>>>>>>>> On Mar 30, 2016 16:13, "Magnus Kongshem" <kongshem@stud.ntnu.no> wrote:
>>>>>>>>>>
>>>>>>>>>> I will be loading 12 files.
>>>>>>>>>>
>>>>>>>>>> AQL below:
>>>>>>>>>>
>>>>>>>>>> use dataverse bigd;
>>>>>>>>>> load dataset posdata using localfs
>>>>>>>>>>     (("path"="localhost:///data/path/to/file/file.adm"),("format"="adm"));
>>>>>>>>>>
>>>>>>>>>> Will it be solved if I concatenate the files and do the
>>>>>>>>>> dataset loading only once?
>>>>>>>>>>
>>>>>>>>>> Magnus
>>>>>>>>>>
>>>>>>>>>> On Wed, Mar 30, 2016 at 3:06 PM, Wail Alkowaileet <wael.y.k@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> How many files are you loading?
>>>>>>>>>>> Can you send the loading AQL?
>>>>>>>>>>>
>>>>>>>>>>> On Mar 30, 2016 16:01, "Magnus Kongshem" <kongshem@stud.ntnu.no> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Using AsterixDB v0.8.8.
>>>>>>>>>>>>
>>>>>>>>>>>> I am loading data into my AsterixDB instance.
>>>>>>>>>>>>
>>>>>>>>>>>> Loading the first file is successful, but when I try to load
>>>>>>>>>>>> another file, I get an "Internal error. Please check
>>>>>>>>>>>> instance logs for further details. [NullPointerException]"
>>>>>>>>>>>>
>>>>>>>>>>>> The files are of type adm and roughly equal in size (3 GB).
>>>>>>>>>>>>
>>>>>>>>>>>> My instance was initialized with these commands:
>>>>>>>>>>>>
>>>>>>>>>>>> drop dataverse bigd if exists;
>>>>>>>>>>>> create dataverse bigd;
>>>>>>>>>>>> use dataverse bigd;
>>>>>>>>>>>>
>>>>>>>>>>>> create type table as open {
>>>>>>>>>>>>   uid: uuid,
>>>>>>>>>>>>   campus: string,
>>>>>>>>>>>>   building: string,
>>>>>>>>>>>>   floor: string,
>>>>>>>>>>>>   timestamp: int32,
>>>>>>>>>>>>   dayOfWeek: int32,
>>>>>>>>>>>>   hourOfDay: int32,
>>>>>>>>>>>>   latitude: double,
>>>>>>>>>>>>   salt_timestamp: int32,
>>>>>>>>>>>>   longitude: double,
>>>>>>>>>>>>   id: string,
>>>>>>>>>>>>   accuracy: double
>>>>>>>>>>>> }
>>>>>>>>>>>> create dataset posdata(table)
>>>>>>>>>>>>   primary key uid autogenerated;
>>>>>>>>>>>> create index stamp on posdata(timestamp);
>>>>>>>>>>>> create index hour on posdata(hourOfDay);
>>>>>>>>>>>> create index day on posdata(dayOfWeek);
>>>>>>>>>>>>
>>>>>>>>>>>> My log file is attached.
>>>>>>>>>>>>
>>>>>>>>>>>> Any help?
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Best regards,
>>>>>>>>>>>> Magnus Kongshem
>>>>>>>>>>>> NTNU
>>>>>>>>>>>> +47 415 65 906
>>>>>
>>>>> Best regards,
>>>>> Ildar
>
> --
> Best regards,
> Magnus Alderslyst Kongshem
> Seniorkomiteen
> Online, linjeforeningen for informatikk
> +47 415 65 906
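[Putting the thread's advice together end to end, a staging-and-insert session might look like the following. This is a sketch only: the file paths are placeholders, and the final drop of the temporary dataset is an assumed cleanup step, not something stated in the thread.]

```aql
use dataverse bigd;

create dataset posdata_temp(table) primary key uid autogenerated;

load dataset posdata_temp using localfs
    (("path"="localhost:///data/path/to/file/file.adm,localhost:///data/path/to/file/file2.adm"),
     ("format"="adm"));

insert into dataset posdata(
  for $x in dataset posdata_temp return {
    "campus": $x.campus,
    "building": $x.building,
    "floor": $x.floor,
    "timestamp": $x.timestamp,
    "dayOfWeek": $x.day,
    "hourOfDay": $x.hour,
    "latitude": $x.latitude,
    "salt_timestamp": $x.salt,
    "longitude": $x.longitude,
    "id": $x.id,
    "accuracy": $x.accuracy
  }
);

drop dataset posdata_temp;
```

The single load lists all file paths in one comma-separated "path" string, since per-file loads into the same dataset are not supported; the insert omits the uid field so that a fresh value is autogenerated and no duplicate-field error occurs.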