Commit 5613bc8

fix(README): resolving a merge conflict
Signed-off-by: Cryptophobia <aouzounov@gmail.com>
2 parents 5930851 + 523ef79

File tree: 11 files changed (+93, -86 lines)

README.md

Lines changed: 6 additions & 7 deletions

````diff
@@ -4,11 +4,10 @@
 [![Build Status](https://travis-ci.org/teamhephy/controller.svg?branch=master)](https://travis-ci.org/teamhephy/controller)
 [![codecov.io](https://codecov.io/github/deis/controller/coverage.svg?branch=master)](https://codecov.io/github/deis/controller?branch=master)
 [![Docker Repository on Quay](https://quay.io/repository/deisci/controller/status "Docker Repository on Quay")](https://quay.io/repository/deisci/controller)
-[![Dependency Status](https://www.versioneye.com/user/projects/5863f1de6f4bf900128fa95a/badge.svg?style=flat)](https://www.versioneye.com/user/projects/5863f1de6f4bf900128fa95a)
 
 Deis (pronounced DAY-iss) Workflow is an open source Platform as a Service (PaaS) that adds a developer-friendly layer to any [Kubernetes](http://kubernetes.io) cluster, making it easy to deploy and manage applications on your own servers.
 
-For more information about the Deis Workflow, please visit the main project page at https://github.com/deisthree/workflow.
+For more information about the Deis Workflow, please visit the main project page at https://github.com/teamhephy/workflow.
 
 We welcome your input! If you have feedback, please [submit an issue][issues]. If you'd like to participate in development, please read the "Development" section below and [submit a pull request][prs].
 
@@ -47,7 +46,7 @@ You'll want to test your code changes interactively in a working Kubernetes clus
 
 ### Workflow Installation
 
-After you have a working Kubernetes cluster, you're ready to [install Workflow](https://deis.com/docs/workflow/installing-workflow/).
+After you have a working Kubernetes cluster, you're ready to [install Workflow](https://docs.teamhephy.com/installing-workflow/).
 
 ## Testing Your Code
 
@@ -77,8 +76,8 @@ kubectl get pod --namespace=deis -w | grep deis-controller
 ```
 
 [install-k8s]: https://kubernetes.io/docs/setup/pick-right-solution
-[issues]: https://github.com/deisthree/controller/issues
-[prs]: https://github.com/deisthree/controller/pulls
-[workflow]: https://github.com/deisthree/workflow
+[issues]: https://github.com/teamhephy/controller/issues
+[prs]: https://github.com/teamhephy/controller/pulls
+[workflow]: https://github.com/teamhephy/workflow
 [Docker]: https://www.docker.com/
-[v2.18]: https://github.com/deisthree/workflow/releases/tag/v2.18.0
+[v2.18]: https://github.com/teamhephy/workflow/releases/tag/v2.21.4
````

charts/controller/templates/controller-deployment.yaml

Lines changed: 4 additions & 0 deletions

```diff
@@ -134,6 +134,10 @@ spec:
               secretKeyRef:
                 name: database-creds
                 key: password
+        {{- if (.Values.deis_ignore_scheduling_failure) }}
+        - name: DEIS_IGNORE_SCHEDULING_FAILURE
+          value: "{{ .Values.deis_ignore_scheduling_failure }}"
+        {{- end }}
         - name: RESERVED_NAMES
           value: "deis, deis-builder, deis-workflow-manager, grafana"
         - name: WORKFLOW_NAMESPACE
```
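The chart only renders `DEIS_IGNORE_SCHEDULING_FAILURE` when the Helm value is set, and the controller later reads it with `os.environ.get` (see `rootfs/scheduler/resources/pod.py` below). One subtlety worth keeping in mind, sketched here with an assumed rendered value of `"false"`: environment variables are always strings, so any non-empty value is truthy on the Python side.

```python
import os

# Hypothetical scenario: the template renders the value verbatim, so a YAML
# string such as "false" still reaches the container as a non-empty string.
os.environ["DEIS_IGNORE_SCHEDULING_FAILURE"] = "false"

# Any non-empty string is truthy, so this check treats "false" as enabled.
ignore = os.environ.get("DEIS_IGNORE_SCHEDULING_FAILURE", False)
print(bool(ignore))  # True
```

A boolean `false` in `values.yaml` avoids the issue entirely, because the `{{- if }}` guard then skips rendering the variable at all.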

rootfs/Dockerfile

Lines changed: 2 additions & 1 deletion

```diff
@@ -1,4 +1,4 @@
-FROM quay.io/deis/base:v0.3.6
+FROM hephy/base:v0.4.1
 
 RUN adduser --system \
    --shell /bin/bash \
@@ -17,6 +17,7 @@ RUN buildDeps='gcc libffi-dev libpq-dev libldap2-dev libsasl2-dev python3-dev py
     libpq5 \
     libldap-2.4 \
     python3-minimal \
+    python3-distutils \
     # cryptography package needs pkg_resources
     python3-pkg-resources && \
     ln -s /usr/bin/python3 /usr/bin/python && \
```

rootfs/Dockerfile.test

Lines changed: 2 additions & 2 deletions

```diff
@@ -1,4 +1,4 @@
-FROM quay.io/deis/base:v0.3.6
+FROM hephy/base:v0.4.1
 
 RUN adduser --system \
    --shell /bin/bash \
@@ -49,7 +49,7 @@ RUN buildDeps='gcc libffi-dev libpq-dev libldap2-dev libsasl2-dev python3-dev py
 WORKDIR /app
 
 # test-unit additions to the main Dockerfile
-ENV PGBIN=/usr/lib/postgresql/9.5/bin PGDATA=/var/lib/postgresql/data
+ENV PGBIN=/usr/lib/postgresql/10/bin PGDATA=/var/lib/postgresql/data
 RUN apt-get update && \
     DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
     git \
```

rootfs/api/settings/production.py

Lines changed: 6 additions & 0 deletions

```diff
@@ -218,6 +218,12 @@
             'filters': ['require_debug_true'],
             'propagate': True,
         },
+        'django_auth_ldap': {
+            'handlers': ['console'],
+            'level': 'DEBUG',
+            'filters': ['require_debug_true'],
+            'propagate': False,
+        },
         'api': {
             'handlers': ['console'],
             'propagate': True,
```
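The added block is a standard logger entry in Django's `LOGGING` dict. A minimal, self-contained sketch of the same shape (the Django-specific `require_debug_true` filter is omitted here since it does not exist outside a Django project) shows how the `DEBUG` level and `propagate: False` take effect:

```python
import logging
import logging.config

# Minimal dictConfig mirroring the added 'django_auth_ldap' logger entry.
logging.config.dictConfig({
    'version': 1,
    'handlers': {'console': {'class': 'logging.StreamHandler'}},
    'loggers': {
        'django_auth_ldap': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,  # records stop here instead of reaching the root logger
        },
    },
})

logger = logging.getLogger('django_auth_ldap')
assert logger.level == logging.DEBUG
assert logger.propagate is False
```

With `propagate` set to `False`, LDAP debug records go only to this logger's own console handler rather than being duplicated by ancestor loggers.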

rootfs/requirements.txt

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 # Deis controller requirements
 backoff==1.4.3
-django==1.11.23
+django==1.11.29
 django-auth-ldap==1.2.15
 django-cors-middleware==1.3.1
 django-guardian==1.4.9
```

rootfs/scheduler/resources/deployment.py

Lines changed: 38 additions & 19 deletions

```diff
@@ -139,6 +139,11 @@ def create(self, namespace, name, image, entrypoint, command, spec_annotations,
         return response
 
     def update(self, namespace, name, image, entrypoint, command, spec_annotations, **kwargs):
+        # Set the replicas value to the current replicas of the deployment.
+        # This avoids resetting the replicas which causes disruptions during the deployment.
+        deployment = self.deployment.get(namespace, name).json()
+        current_replicas = int(deployment['spec']['replicas'])
+        kwargs['replicas'] = current_replicas
         manifest = self.manifest(namespace, name, image,
                                  entrypoint, command, spec_annotations, **kwargs)
 
@@ -277,9 +282,9 @@ def are_replicas_ready(self, namespace, name):
 
         if (
             'unavailableReplicas' in status or
-            ('replicas' not in status or status['replicas'] is not desired) or
-            ('updatedReplicas' not in status or status['updatedReplicas'] is not desired) or
-            ('availableReplicas' not in status or status['availableReplicas'] is not desired)
+            ('replicas' not in status or status['replicas'] != desired) or
+            ('updatedReplicas' not in status or status['updatedReplicas'] != desired) or
+            ('availableReplicas' not in status or status['availableReplicas'] != desired)
         ):
             return False, pods
 
@@ -380,22 +385,36 @@ def _check_for_failed_events(self, namespace, labels):
         Request for new ReplicaSet of Deployment and search for failed events involved by that RS
         Raises: KubeException when RS have events with FailedCreate reason
         """
-        response = self.rs.get(namespace, labels=labels)
-        data = response.json()
-        fields = {
-            'involvedObject.kind': 'ReplicaSet',
-            'involvedObject.name': data['items'][0]['metadata']['name'],
-            'involvedObject.namespace': namespace,
-            'involvedObject.uid': data['items'][0]['metadata']['uid'],
-        }
-        events_list = self.ns.events(namespace, fields=fields).json()
-        events = events_list.get('items', [])
-        if events is not None and len(events) != 0:
-            for event in events:
-                if event['reason'] == 'FailedCreate':
-                    log = self._get_formatted_messages(events)
-                    self.log(namespace, log)
-                    raise KubeException(log)
+        max_retries = 3
+        retry_sleep_sec = 3.0
+        for try_ in range(max_retries):
+            response = self.rs.get(namespace, labels=labels)
+            data = response.json()
+            try:
+                fields = {
+                    'involvedObject.kind': 'ReplicaSet',
+                    'involvedObject.name': data['items'][0]['metadata']['name'],
+                    'involvedObject.namespace': namespace,
+                    'involvedObject.uid': data['items'][0]['metadata']['uid'],
+                }
+            except Exception as e:
+                if try_ + 1 < max_retries:
+                    self.log(namespace,
+                             "Got an empty ReplicaSet list. Trying one more time. {}".format(
+                                 json.dumps(labels)))
+                    time.sleep(retry_sleep_sec)
+                    continue
+                self.log(namespace, "Did not find the ReplicaSet for {}".format(
+                    json.dumps(labels)), "WARN")
+                raise e
+            events_list = self.ns.events(namespace, fields=fields).json()
+            events = events_list.get('items', [])
+            if events is not None and len(events) != 0:
+                for event in events:
+                    if event['reason'] == 'FailedCreate':
+                        log = self._get_formatted_messages(events)
+                        self.log(namespace, log)
+                        raise KubeException(log)
 
     @staticmethod
     def _get_formatted_messages(events):
```
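The second hunk replaces `is not` with `!=` when comparing replica counts, which fixes a genuine correctness bug: `is` tests object identity, not equality, and CPython happens to cache small integers (roughly -5 through 256), so identity comparisons appear to work for small replica counts and silently fail above that range. A minimal sketch of the CPython-specific behavior:

```python
# `int(...)` forces fresh objects so constant folding can't mask the effect.
small = int("3")
big = int("300")

assert small is int("3")      # small ints are cached, so identity happens to hold
assert big == int("300")      # the values are equal...
assert big is not int("300")  # ...but they are distinct objects, so `is` fails
```

This is why the old `status['replicas'] is not desired` check could report a deployment as never ready once the desired replica count exceeded the small-integer cache.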

rootfs/scheduler/resources/pod.py

Lines changed: 5 additions & 3 deletions

```diff
@@ -552,7 +552,7 @@ def events(self, pod):
         if not events:
             events = []
         # make sure that events are sorted
-        events.sort(key=lambda x: x['lastTimestamp'])
+        events.sort(key=lambda x: x['lastTimestamp'] or '')
         return events
 
     def _handle_pod_errors(self, pod, reason, message):
@@ -577,9 +577,11 @@ def _handle_pod_errors(self, pod, reason, message):
             "ErrImageNeverPull": "ErrImageNeverPullPolicy",
             # Not including this one for now as the message is not useful
             # "BackOff": "BackOffPullImage",
-            # FailedScheduling relates limits
-            "FailedScheduling": "FailedScheduling",
         }
+        # We want to be able to ignore pod scheduling errors as they might be temporary
+        if not os.environ.get("DEIS_IGNORE_SCHEDULING_FAILURE", False):
+            # FailedScheduling relates limits
+            event_errors["FailedScheduling"] = "FailedScheduling"
 
         # Nicer error than from the event
         # Often this gets to ImageBullBackOff before we can introspect tho
```
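The `events.sort` fix guards against Kubernetes events whose `lastTimestamp` is null: in Python 3, comparing `None` with a string raises `TypeError`, so the `or ''` fallback coerces missing timestamps to an empty string, which sorts first. A small sketch with made-up event dicts:

```python
events = [
    {"lastTimestamp": "2020-04-01T12:00:00Z"},
    {"lastTimestamp": None},  # e.g. an event where only eventTime is populated
]

# Without the `or ''` fallback this line raises:
#   TypeError: '<' not supported between instances of 'NoneType' and 'str'
events.sort(key=lambda x: x["lastTimestamp"] or "")

assert events[0]["lastTimestamp"] is None  # "" sorts before any real timestamp
```

Since the timestamps are RFC 3339 strings, plain lexicographic ordering already matches chronological ordering, so only the `None` case needed handling.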

rootfs/scheduler/tests/test_deployments.py

Lines changed: 15 additions & 13 deletions

```diff
@@ -102,12 +102,12 @@ def test_deployment_api_version_1_9_and_up(self):
             deployment.version = mock.MagicMock(return_value=parse(canonical))
             actual = deployment.api_version
             self.assertEqual(
+                expected,
+                actual,
+                "{} breaks - expected {}, got {}".format(
+                    canonical,
                     expected,
-                actual,
-                "{} breaks - expected {}, got {}".format(
-                    canonical,
-                    expected,
-                    actual))
+                    actual))
 
     def test_deployment_api_version_1_8_and_lower(self):
         cases = ['1.8', '1.7', '1.6', '1.5', '1.4', '1.3', '1.2']
@@ -120,12 +120,12 @@ def test_deployment_api_version_1_8_and_lower(self):
             deployment.version = mock.MagicMock(return_value=parse(canonical))
             actual = deployment.api_version
             self.assertEqual(
+                expected,
+                actual,
+                "{} breaks - expected {}, got {}".format(
+                    canonical,
                     expected,
-                actual,
-                "{} breaks - expected {}, got {}".format(
-                    canonical,
-                    expected,
-                    actual))
+                    actual))
 
     def test_create_failure(self):
         with self.assertRaises(
@@ -158,11 +158,13 @@ def test_update(self):
         deployment = self.scheduler.deployment.get(self.namespace, name).json()
         self.assertEqual(deployment['spec']['replicas'], 4, deployment)
 
-        # emulate scale without calling scale
-        self.update(self.namespace, name, replicas=2)
+        # update the version
+        new_version = 'v1024'
+        self.update(self.namespace, name, version=new_version)
 
         deployment = self.scheduler.deployment.get(self.namespace, name).json()
-        self.assertEqual(deployment['spec']['replicas'], 2, deployment)
+        self.assertEqual(deployment['spec']['template']['metadata']['labels']['version'],
+                         new_version, deployment)
 
     def test_delete_failure(self):
         # test failure
```
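The reindented calls above keep unittest's `assertEqual(first, second, msg)` argument order, with the expected value first so failure messages read naturally. A self-contained sketch of the same pattern (the `Demo` class and its literal values are illustrative, not taken from the test suite):

```python
import unittest

class Demo(unittest.TestCase):
    def test_api_version(self):
        canonical, expected = '1.9', 'apps/v1'
        actual = 'apps/v1'  # stand-in for deployment.api_version
        # (first, second, msg): expected before actual, message names the case
        self.assertEqual(
            expected,
            actual,
            "{} breaks - expected {}, got {}".format(canonical, expected, actual))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Keeping a consistent argument order matters because some assertion helpers label the two sides as "expected" and "actual" in their failure output.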

rootfs/scheduler/tests/test_horizontalpodautoscaler.py

Lines changed: 7 additions & 20 deletions

```diff
@@ -62,27 +62,14 @@ def update(self, namespace=None, name=generate_random_name(), **kwargs):
         self.assertEqual(horizontalpodautoscaler.status_code, 200, horizontalpodautoscaler.json())  # noqa
         return name
 
-    def update_deployment(self, namespace=None, name=generate_random_name(), **kwargs):
+    def scale_deployment(self, namespace=None, name=generate_random_name(), replicas=1):
         """
-        Helper function to update and verify a deployment on the namespace
+        Helper function to scale the replicas of a deployment
         """
         namespace = self.namespace if namespace is None else namespace
-        # these are all required even if it is kwargs...
-        d_kwargs = {
-            'app_type': kwargs.get('app_type', 'web'),
-            'version': kwargs.get('version', 'v99'),
-            'replicas': kwargs.get('replicas', 4),
-            'pod_termination_grace_period_seconds': 2,
-            'image': 'quay.io/fake/image',
-            'entrypoint': 'sh',
-            'command': 'start',
-            'spec_annotations': kwargs.get('spec_annotations', {}),
-        }
-
-        deployment = self.scheduler.deployment.update(namespace, name, **d_kwargs)
-        data = deployment.json()
-        self.assertEqual(deployment.status_code, 200, data)
-        return name
+        self.scheduler.deployment.scale(namespace, name, image=None,
+                                        entrypoint=None, command=None,
+                                        replicas=replicas)
 
     def test_create_failure(self):
         with self.assertRaises(
@@ -147,7 +134,7 @@ def test_update(self):
         self.assertEqual(deployment['status']['availableReplicas'], 3)
 
         # scale deployment to 1 (should go back to 3)
-        self.update_deployment(self.namespace, name, replicas=1)
+        self.scale_deployment(self.namespace, name, replicas=1)
 
         # check the deployment object
         deployment = self.scheduler.deployment.get(self.namespace, name).json()
@@ -158,7 +145,7 @@ def test_update(self):
         self.assertEqual(deployment['status']['availableReplicas'], 3)
 
         # scale deployment to 6 (should go back to 4)
-        self.update_deployment(self.namespace, name, replicas=6)
+        self.scale_deployment(self.namespace, name, replicas=6)
 
         # check the deployment object
         deployment = self.scheduler.deployment.get(self.namespace, name).json()
```
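One quirk the refactor preserves: both the old and new helpers declare `name=generate_random_name()` as a default argument. Python evaluates default values once, at function definition time, so every call that omits `name` reuses the same generated name. A sketch with an assumed stand-in for the suite's `generate_random_name` helper:

```python
import random
import string

def generate_random_name():
    # Assumed stand-in for the test suite's helper of the same name.
    return ''.join(random.choices(string.ascii_lowercase, k=8))

def scale_deployment(name=generate_random_name(), replicas=1):
    # The default was computed once, at `def` time, not on each call.
    return name, replicas

first, _ = scale_deployment()
second, _ = scale_deployment()
assert first == second  # the same default name on every call that omits `name`
```

That is harmless here because the tests always pass `name` explicitly, but it is worth knowing when reading the signatures.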
