OSDC 2018 Recap: The Archive is online


 
Open Source Data Center Conference | JUNE 12 – 13, 2018 | BERLIN
Hello Open Source Lovers,
as we know, all of you are focused most of the time on the things to come: an upcoming release or a future-oriented project you are working on. But as Plato said, “twice and thrice over, as they say, good is it to repeat and review what is good.”

The OSDC archive is now online, giving us the occasion to do just that.

OSDC 2018 was a blast! Austria, Belgium, Germany, Great Britain, Italy, the Netherlands, Sweden, Switzerland, Spain, the Czech Republic and the USA: 130 participants from all over the world came to Berlin to exchange thoughts and discuss the future of open source data center solutions.
“Extending Terraform”, “Monitoring Kubernetes at scale”, “Puppet on the Road to Pervasive Automation” and “Migrating to the Cloud” were just a few of many trailblazing topics: We are overwhelmed by the content and quality of our speakers, who presented a really interesting range of technologies and use cases, experience reports and different approaches. No matter the topic, all of the 27 high-level speakers had one goal: explain the why and the how and share their lessons learned. And in doing so, they all contributed to Simplifying Complex Infrastructures with Open Source.
Besides the lectures the attendees joined in thrilling discussions and a phenomenal evening event with a mesmerizing view over the city of Berlin.
A big thank you to our speakers and sponsors and to all participants, who made the event special. We are already excited about next year’s event – OSDC 2019, on May 14 to 15 in Berlin. Save the date!
And now, take a moment, get yourself a cup of coffee, lean back and recap the 2018 conference. Have a look at the videos, slides and photos.

Julia Hornung
Marketing Manager

Julia has been a member of the NETWAYS family since June 2018. Before joining our marketing team, she worked as a journalist and in the independent theater scene. Her passion is good storytelling, clear language and polished texts. In her private life she devotes herself to climbing and to her training as a yoga teacher.

OSDC 2018 – Get ready for Berlin!


Expect exciting discussions, an intensive exchange of experiences, inspiring encounters and great networking opportunities!
Spend two extraordinary summer days in Berlin, the capital of OS data center solutions! Check the 2018 conference agenda and get an update on the OSDC speaker line-up!
OSDC is about catching up on the latest developments and initiating forward-looking projects. Meet developers, decision-makers, administrators, architects and of course the unique OS community. Benefit from the comprehensive experience of international OS experts and engage with a wide range of industry leaders.
We will kick off the conference with an informal get-together on the evening of June 11.
The two-day lecture program on June 12 and 13 is composed of presentations on the latest research, findings and fresh approaches. Get updated on the latest developments in OS data center solutions and add to your knowledge by learning from open source enthusiasts from all over the world.
We are especially looking forward to the evening event on June 12. Enjoy dinner and drinks at PURO and take in the spectacular view over Berlin from the 20th floor. Take the opportunity to exchange experiences, pick up some new ideas and socialize with the community!
Have a cool time full of exciting discussions and inspiring encounters — GET YOUR TICKET NOW!
See you in Berlin on June 12 to 13!
 

Pamela Drescher
Head of Marketing

Pamela took over marketing at NETWAYS in December 2015. She is responsible for the corporate identity of our events and of NETWAYS as a whole. Her close cooperation with the events team stems from the fact that a few years ago she headed the events department together with Markus, and this outstanding collaboration now ties the events and marketing areas even closer together. In her private life she is the leader of a four-member horde of cats, which gives her the absolutely...

Open Source Data Center Conference – Speakers 2018


We are happy to announce our Open Source Data Center Conference 2018 speaker line-up. We are impressed by the expert proposals we received from across many countries.
With the main subject “SIMPLIFYING COMPLEX IT INFRASTRUCTURES WITH OPEN SOURCE” up for discussion, we have selected a variety of expert proposals from renowned speakers for you.
Our OSDC 2018 head speaker is Mitchell Hashimoto, CEO of HashiCorp, who will enlighten us with an “Update on Terraform”.
We proudly announce our OSDC 2018 speaker list! Amongst others, the conference program will be enriched by:
Akmal Chaudhri | GridGain System | Apache Ignite: the in-memory hammer in your data science toolkit
Ander Juaristi Alamos | Senior Security Researcher | Hitchhiker’s guide to TLS 1.3 and GnuTLS
Max Neunhöffer | Developer – ArangoDB | The Computer Science behind a modern distributed data store
Cornelius Schumacher | Engineer – SUSE Linux  | Highly Available Cloud Foundry on Kubernetes
Mike Place | SaltStack | Introduction to SaltStack in the modern data center
Gianluca Arbezzano | InfluxData | Distributed monitoring
Jan-Piet Mens | Independent Unix/Linux Consultant and Sysadmin | Introducing Ansible AWX, the Open Source “Tower”
Also look forward to excellent talks on topics such as Modern Data Center, Data Science Toolkit, Highly Available Cloud Foundry on Kubernetes, Providing Supporting Docker Images, From Monolith to Microservices and many more.
To listen to and learn from the experts, we welcome you to be part of the international OSDC 2018. Book your seat now. We are looking forward to seeing you in Berlin.
Don’t miss it!

Keya Kher
Marketing Manager

Keya has been part of our marketing team since October 2017. She knows her way around social media marketing and is on her way to becoming a graphic design pro. When she is not living out her creativity, she explores other cities or loses herself in a book. Her favorite is “The Shiva Trilogy”.

OSDC 2017 – What a great week full of open source!

Over the weekend, we caught up on missed sleep and we were really happy about the successful Open Source Data Center Conference last week in Berlin.
The OSDC 2017 began with our workshop day on Tuesday, featuring “Graylog – Centralized Log Management”, “Mesos Marathon – Orchestrating Docker Containers” and “Terraform – Infrastructure as Code”.
On Wednesday and Thursday attendees could join 23 interesting talks on case studies, the latest developments and best practices. CONTAINERS AND MICROSERVICES | CONFIGURATION MANAGEMENT | TESTING, METRICS AND ANALYSIS and TOOLS & INFRASTRUCTURE formed the core of the conference! You can find details about the talks in Michi’s and Dirk’s blog posts.
On Wednesday evening, we went to the Umspannwerk Ost. The sun was out, so we could all sit outside and discuss the exciting days. Furthermore, there was enough time for networking, establishing contacts and becoming more familiar with the open source community!
After the conference ended on Thursday, we were happy to have met you all in Berlin and also a little bit sad, because three exciting conference days had come to an end.
At this point, it is time to say a cordial THANK YOU!
Thanks to our speakers who made us laugh and who gave us so much knowledge!
Thanks to our sponsors for the wonderful support and your confidence!
Thanks to our attendees for making the OSDC unique!
We hope to see you all next year! The date for 2018 is already fixed.
The pictures, slides and videos of the OSDC will be available soon!
OSDC 2018 | June 14 – 16, 2018 | Berlin

OSDC 2017 – How it went on!

After the talks on Wednesday were finished, two OSDC VIP buses were waiting in front of the MOA Hotel. Once all attendees had found their seats, we drove through the whole city and finally reached the Umspannwerk Ost.
Bright sunshine, perfect. As in the previous year, there was a huge variety of culinary delights. Thanks to the bright sunshine lasting into the evening hours, most of us sat outside the listed building, which is the oldest substation in Berlin. With some soft drinks and yummy food (have a look at the pictures), the evening ran its course.
As a little surprise for the attendees, we organised a foosball table. Not a standard one, but one for more than four players. It was great fun!
And so the hours passed until the third VIP shuttle bus brought all of us back to the conference hotel. After a very short night, today’s talks started on time. You can find out what they were about in Michi’s blog post once the conference has finished.
For our events team, it’s now the final sprint before the post-processing starts tomorrow. We hope all attendees have an interesting second conference day and a safe journey home!
SAVE THE DATE FOR 2018 | June 12-14
 

OSDC 2017 – How it all began

On Monday evening, a group of very excited NETWAYS guys arrived in Berlin to prepare the OSDC. After the rooms were ready for Tuesday’s workshops, the pizza for our busy bees was definitely well deserved. By then it was already very late, and so they all fell into a deep, deep sleep before the bewitched bell rang again. At 10 o’clock our workshops started: “Graylog – Centralized Log Management” by Jan Doberstein and Bernd Ahlers, “Terraform – Infrastructure as Code” by Seth Vargo and “Mesos Marathon – Orchestrating Docker Containers” by Gabriel Hartmann. The attendees learned a lot, and we hope there’s a little space left for the talks on Wednesday and Thursday!
Then the NETWAYS crew started on the last preparations for Wednesday, and the first conference day was already over! After a joyful night with our beloved Tele-Inder (a classic Berlin Späti), the conference started with Bernd’s opening and the talks. You can read what the talks were about in Dirk’s blog post! But we’ll only say this much: it was interesting new things and fun in equal measure! The pictures will follow!
 

Ready, Steady, Go! — The faster the better!


This is your LAST CHANCE to be part of the best open source conference this May in Berlin!
On May 16 to 18 it’s all about open source data center solutions for complex IT infrastructures once again. Three days of hands-on workshops, presentations and social networking in a super relaxed atmosphere with a bunch of really great people is what you can expect. The 2017 main conference topics are:
Containers and Microservices
Configuration Management
Testing, Metrics and Analyses
Tools & Infrastructure
Join the open source community, learn from well-known data center experts, get the latest know-how for your daily business and meet international open source professionals.
So hurry up if you want to grab one of the last remaining tickets for OSDC 2017 and register now at www.osdc.de!

Pamela Drescher
Head of Marketing

Pamela took over marketing at NETWAYS in December 2015. She is responsible for the corporate identity of our events and of NETWAYS as a whole. Her close cooperation with the events team stems from the fact that a few years ago she headed the events department together with Markus, and this outstanding collaboration now ties the events and marketing areas even closer together. In her private life she is the leader of a four-member horde of cats, which gives her the absolutely...

OSDC – News compact

We hereby proudly present our new OSDC app! Everything you need to know can be found in it: the program, the evening event, the workshops, directions and information on the speakers. In the OSDC app you will find everything an attendee may desire. Except for food, but you will get that at the hotel.
Apart from the great new app, we have loads of OSDC news for you! The conference takes place soon, so it is definitely about time we explain why you should come to Berlin and what makes the OSDC so special.
The first day kicks off with three workshops:

Graylog – Centralized Log Management | Terraform – Infrastructure as Code | Mesos Marathon – Orchestrating Docker Containers

We emphasize the importance of small groups, limited to 12 persons, to ensure an optimal learning atmosphere. This way the coaches are able to respond to individual needs and requests.
The second and third conference days are fully packed with top-notch talks. Among others, the companies Elastic, Chef, CoreOS, Red Hat, HashiCorp and Travis CI will be represented. Here’s a small taste of our program:

Seth Vargo (HashiCorp) | Modern Secrets Management with Vault
Mandi Walls (Chef) | Building Security Into Your Workflow with InSpec
Casey Callendrello (CoreOS) | The evolution of the Container Network Interface
Monica Sarbu (Elastic) | Collecting the right Data to monitor your infrastructure
James Shubin (Red Hat) | Mgmt Config: Autonomous systems
Mathias Meyer (Travis CI) | Build the Home of Open Source Testing, Without the Datacenter

On the evening of the second conference day, a special evening event will take place, giving the conference its special charm. There will be enough time to review the day and to meet for interesting discussions.
And of course many, many more! The other speakers can be found on www.osdc.de and of course in our fabulous app!

Ceph – CRUSH rules via the CLI

The CRUSH map makes it possible to influence how Ceph replicates and distributes its objects. The default CRUSH map distributes the data so that only one copy is stored per host.
If a Ceph cluster calls for different priorities, for example because hosts share a network, share a rack with a common power supply, or sit in the same data center, in other words because the failure domains are laid out differently, these dependencies can be reflected in the CRUSH map.
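A quick aside, not required for the purely CLI-based workflow below: the complete CRUSH map can also be exported and decompiled into a readable text file for inspection. A minimal sketch using the standard Ceph tools:

# export the compiled CRUSH map and decompile it into plain text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# an edited map could later be recompiled and injected again:
# crushtool -c crushmap.txt -o crushmap.new
# ceph osd setcrushmap -i crushmap.new
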
For example, we want to bring our cluster down from a replication factor of 3 to a replication factor of 2. Since two of the hosts share a rack, however, we also want to model that in our CRUSH map, and we want to do all of it via the CLI:
Starting point:

[root@box12 ~]# ceph osd tree
ID WEIGHT  TYPE NAME                       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6 0.33110 datacenter dc03                                                   
-1 0.33110     root datacenter01                                             
-5 0.33110         datacenter datacenter02                                   
-4 0.11037             host box14                                            
 6 0.03679                 osd.6                up  1.00000          1.00000
 7 0.03679                 osd.7                up  1.00000          1.00000
 8 0.03679                 osd.8                up  1.00000          1.00000
-3 0.11037             host box13                                            
 3 0.03679                 osd.3                up  1.00000          1.00000
 4 0.03679                 osd.4                up  1.00000          1.00000
 5 0.03679                 osd.5                up  1.00000          1.00000
-2 0.11037             host box12                                            
 0 0.03679                 osd.0                up  1.00000          1.00000
 1 0.03679                 osd.1                up  1.00000          1.00000
 2 0.03679                 osd.2                up  1.00000          1.00000

We create the two racks:

[root@box12 ~]# ceph osd crush add-bucket rack1 rack
added bucket rack1 type rack to crush map
[root@box12 ~]# ceph osd crush add-bucket rack2 rack
added bucket rack2 type rack to crush map

The racks have been created:

[root@box12 ~]# ceph osd tree
ID  WEIGHT  TYPE NAME                       UP/DOWN REWEIGHT PRIMARY-AFFINITY                                                      
 -8       0 rack rack2                                                        
 -7       0 rack rack1                                                        
 -6 0.33110 datacenter dc03                                                   
 -1 0.33110     root datacenter01                                             
 -5 0.33110         datacenter datacenter02                                   
 -4 0.11037             host box14                                            
  6 0.03679                 osd.6                up  1.00000          1.00000
  7 0.03679                 osd.7                up  1.00000          1.00000
  8 0.03679                 osd.8                up  1.00000          1.00000
 -3 0.11037             host box13                                            
  3 0.03679                 osd.3                up  1.00000          1.00000
  4 0.03679                 osd.4                up  1.00000          1.00000
  5 0.03679                 osd.5                up  1.00000          1.00000
 -2 0.11037             host box12                                            
  0 0.03679                 osd.0                up  1.00000          1.00000
  1 0.03679                 osd.1                up  1.00000          1.00000
  2 0.03679                 osd.2                up  1.00000          1.00000

Now we move hosts box14 & box13 into rack1 and box12 into rack2:

[root@box12 ~]# ceph osd crush move box14 rack=rack1
moved item id -4 name 'box14' to location {rack=rack1} in crush map
[root@box12 ~]# ceph osd crush move box13 rack=rack1
moved item id -3 name 'box13' to location {rack=rack1} in crush map
[root@box12 ~]# ceph osd crush move box12 rack=rack2
moved item id -2 name 'box12' to location {rack=rack2} in crush map

And the racks into the data center (datacenter02):

[root@box12 ~]# ceph osd crush move  rack1 datacenter=datacenter02
moved item id -7 name 'rack1' to location {datacenter=datacenter02} in crush map
[root@box12 ~]# ceph osd crush move  rack2 datacenter=datacenter02
moved item id -8 name 'rack2' to location {datacenter=datacenter02} in crush map

The whole thing then looks like this:

[root@box12 ~]# ceph osd tree
ID  WEIGHT  TYPE NAME                       UP/DOWN REWEIGHT PRIMARY-AFFINITY                                                       
 -6 0.33110 datacenter dc03                                                   
 -1 0.33110     root datacenter01                                             
 -5 0.33110         datacenter datacenter02                                   
 -7 0.22073             rack rack1                                            
 -4 0.11037                 host box14                                        
  6 0.03679                     osd.6            up  1.00000          1.00000
  7 0.03679                     osd.7            up  1.00000          1.00000
  8 0.03679                     osd.8            up  1.00000          1.00000
 -3 0.11037                 host box13                                        
  3 0.03679                     osd.3            up  1.00000          1.00000
  4 0.03679                     osd.4            up  1.00000          1.00000
  5 0.03679                     osd.5            up  1.00000          1.00000
 -8 0.11037             rack rack2                                            
 -2 0.11037                 host box12                                        
  0 0.03679                     osd.0            up  1.00000          1.00000
  1 0.03679                     osd.1            up  1.00000          1.00000
  2 0.03679                     osd.2            up  1.00000          1.00000

In the next step we have a CRUSH rule created automatically and list the existing rules:

[root@box12 ~]# ceph osd crush rule create-simple ceph-blog datacenter01 rack
[root@box12 ~]# ceph osd crush rule ls
[
    "ceph-blog",
    "test03"
]

‘datacenter01 rack’ here means that the rule starts at datacenter01 and selects all child (leaf) nodes of type rack.
We dump the CRUSH rule:

[root@box12 ~]# ceph osd crush rule dump ceph-blog
{
    "rule_id": 0,
    "rule_name": "ceph-blog",
    "ruleset": 0,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "datacenter01"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "rack"
        },
        {
            "op": "emit"
        }
    ]
}
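
For reference: assuming the usual decompiled CRUSH map text syntax, the JSON dump above corresponds roughly to the following rule definition (a sketch, not taken from this cluster’s map):

rule ceph-blog {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take datacenter01
    step chooseleaf firstn 0 type rack
    step emit
}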

Looks good.
The rbd pool should now use this rule:

[root@box12 ~]# ceph osd pool set rbd crush_ruleset 0
set pool 0 crush_ruleset to 0
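
Side note: on newer Ceph releases (Luminous and later) this pool option is called crush_rule and expects the rule name instead of the numeric ruleset id. The equivalent call there would presumably be:

# Luminous and later (assumption, not part of this walkthrough)
ceph osd pool set rbd crush_rule ceph-blog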

Does it work?

[root@box12 ~]# ceph osd map rbd test
osdmap e421 pool 'rbd' (0) object 'test' -> pg 0.40e8aab5 (0.b5) -> up ([4,0], p4) acting ([4,0,6], p4)

The test object is still being distributed across the 3 hosts.
We reduce the replication from 3 to 2:

[root@box12 ~]# ceph osd pool get rbd size
size: 3
[root@box12 ~]# ceph osd pool set rbd size 2
set pool 0 size to 2

Ceph redistributes the objects. Just be patient:

[root@box12 ~]# ceph -s
    cluster e4d48d99-6a00-4697-b0c5-4e9b3123e5a3
     health HEALTH_ERR
            60 pgs are stuck inactive for more than 300 seconds
            60 pgs peering
            60 pgs stuck inactive
            27 pgs stuck unclean
            recovery 3/45 objects degraded (6.667%)
            recovery 3/45 objects misplaced (6.667%)
     monmap e4: 3 mons at {box12=192.168.33.22:6789/0,box13=192.168.33.23:6789/0,box14=192.168.33.24:6789/0}
            election epoch 82, quorum 0,1,2 box12,box13,box14
     osdmap e424: 9 osds: 9 up, 9 in
            flags sortbitwise
      pgmap v150494: 270 pgs, 1 pools, 10942 kB data, 21 objects
            150 GB used, 189 GB / 339 GB avail
            3/45 objects degraded (6.667%)
            3/45 objects misplaced (6.667%)
                 183 active+clean
                  35 peering
                  27 active+remapped
                  25 remapped+peering
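
Instead of polling ceph -s by hand, the recovery can also be followed live, assuming the standard Ceph CLI:

# stream cluster status and log messages until interrupted with Ctrl-C
ceph -w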

After a while the cluster is back to OK status:

[root@box12 ~]# ceph -s
    cluster e4d48d99-6a00-4697-b0c5-4e9b3123e5a3
     health HEALTH_OK
     monmap e4: 3 mons at {box12=192.168.33.22:6789/0,box13=192.168.33.23:6789/0,box14=192.168.33.24:6789/0}
            election epoch 82, quorum 0,1,2 box12,box13,box14
     osdmap e424: 9 osds: 9 up, 9 in
            flags sortbitwise
      pgmap v150497: 270 pgs, 1 pools, 10942 kB data, 21 objects
            149 GB used, 189 GB / 339 GB avail
                 270 active+clean

Let’s take another look at the distribution of the objects:

[root@box12 ~]# ceph osd map rbd test
osdmap e424 pool 'rbd' (0) object 'test' -> pg 0.40e8aab5 (0.b5) -> up ([4,0], p4) acting ([4,0], p4)

Looks better.
Maybe it was just a coincidence. We stop osd.0 on box12; the data should still be replicated across both racks:

[root@box12 ~]# systemctl stop ceph-osd@0
[root@box12 ~]# ceph osd tree
ID  WEIGHT  TYPE NAME                       UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -6 0.33110 datacenter dc03
 -1 0.33110     root datacenter01
 -5 0.33110         datacenter datacenter02
 -7 0.22073             rack rack1
 -4 0.11037                 host box14
  6 0.03679                     osd.6            up  1.00000          1.00000
  7 0.03679                     osd.7            up  1.00000          1.00000
  8 0.03679                     osd.8            up  1.00000          1.00000
 -3 0.11037                 host box13
  3 0.03679                     osd.3            up  1.00000          1.00000
  4 0.03679                     osd.4            up  1.00000          1.00000
  5 0.03679                     osd.5            up  1.00000          1.00000
 -8 0.11037             rack rack2
 -2 0.11037                 host box12
  0 0.03679                     osd.0          down        0          1.00000
  1 0.03679                     osd.1            up  1.00000          1.00000
  2 0.03679                     osd.2            up  1.00000          1.00000

The cluster redistributes again… just be patient:

[root@box12 ~]# ceph osd map rbd test
osdmap e426 pool 'rbd' (0) object 'test' -> pg 0.40e8aab5 (0.b5) -> up ([4], p4) acting ([4], p4)
[root@box12 ~]# ceph -s
    cluster e4d48d99-6a00-4697-b0c5-4e9b3123e5a3
     health HEALTH_WARN
            96 pgs degraded
            31 pgs stuck unclean
            96 pgs undersized
            recovery 10/42 objects degraded (23.810%)
            1/9 in osds are down
     monmap e4: 3 mons at {box12=192.168.33.22:6789/0,box13=192.168.33.23:6789/0,box14=192.168.33.24:6789/0}
            election epoch 82, quorum 0,1,2 box12,box13,box14
     osdmap e426: 9 osds: 8 up, 9 in; 96 remapped pgs
            flags sortbitwise,require_jewel_osds
      pgmap v150626: 270 pgs, 1 pools, 10942 kB data, 21 objects
            149 GB used, 189 GB / 339 GB avail
            10/42 objects degraded (23.810%)
                 174 active+clean
                  96 active+undersized+degraded

After a while:

[root@box12 ~]# ceph -s
    cluster e4d48d99-6a00-4697-b0c5-4e9b3123e5a3
     health HEALTH_OK
     monmap e4: 3 mons at {box12=192.168.33.22:6789/0,box13=192.168.33.23:6789/0,box14=192.168.33.24:6789/0}
            election epoch 82, quorum 0,1,2 box12,box13,box14
     osdmap e429: 9 osds: 8 up, 8 in
            flags sortbitwise,require_jewel_osds
      pgmap v150925: 270 pgs, 1 pools, 14071 kB data, 22 objects
            132 GB used, 168 GB / 301 GB avail
                 270 active+clean

We test again:

[root@box12 ~]# ceph osd map rbd test
osdmap e429 pool 'rbd' (0) object 'test' -> pg 0.40e8aab5 (0.b5) -> up ([4,1], p4) acting ([4,1], p4)

The object is stored once in rack1 and once in rack2. Perfect!
Not enough yet? Would you like to learn even more about Ceph? Then visit our Ceph training 😉
Further reading: http://www.crss.ucsc.edu/media/papers/weil-sc06.pdf