
E2e test k8s #1

Closed
wants to merge 6 commits into from
Conversation

@jinja2 jinja2 (Owner) commented Dec 15, 2023

Description:

Link to tracking Issue:

Testing:

Documentation:

@jinja2 jinja2 marked this pull request as draft December 15, 2023 18:35
@jinja2 jinja2 force-pushed the e2e-test-k8s branch 10 times, most recently from e8ab7e7 to 972b10f on December 18, 2023 07:06
crobert-1 and others added 5 commits December 18, 2023 15:10
…telemetry#28651)

**Description:**
This fixes an inconsistency introduced with the creation of this package.
In open-telemetry#25096 @cparkins was added as a code owner in the
[metadata.yaml](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/translator/azure/metadata.yaml)
but not in the top-level `CODEOWNERS` file.

Co-authored-by: Alex Boten <[email protected]>
@jinja2 jinja2 closed this Jan 31, 2024
@jinja2 jinja2 deleted the e2e-test-k8s branch January 31, 2024 04:32
jinja2 pushed a commit that referenced this pull request Jul 9, 2024
**Description:**
This PR implements the new container logs parser as proposed in
open-telemetry#31959.

**Link to tracking Issue:** open-telemetry#31959

**Testing:**

Added unit tests. Manual testing steps are provided below as well:

### How to test this manually

1. Use the following config file:
```yaml
receivers:
  filelog:
    start_at: end
    include_file_name: false
    include_file_path: true
    include:
    - /var/log/pods/*/*/*.log
    operators:
      - id: container-parser
        type: container
        output: m1
      - type: move
        id: m1
        from: attributes.k8s.pod.name
        to: attributes.val
      - id: some
        type: add
        field: attributes.key2.key_in
        value: val2

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [debug]
      processors: []
```
2. Start the collector:
`./bin/otelcontribcol_linux_amd64 --config
~/otelcol/container_parser/config.yaml`
3. Use the following bash script to create some logs:
```bash
#!/bin/bash

echo '2024-04-13T07:59:37.505201169-05:00 stdout P This is a very very long crio line th' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '{"log":"INFO: log line here","stream":"stdout","time":"2029-03-30T08:31:20.545192187Z"}' >> /var/log/pods/kube-controller-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d6/kube-controller/1.log
echo '2024-04-13T07:59:37.505201169-05:00 stdout F at is awesome! crio is awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '2021-06-22T10:27:25.813799277Z stdout P some containerd log th' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
echo '{"log":"INFO: another log line here","stream":"stdout","time":"2029-03-30T08:31:20.545192187Z"}' >> /var/log/pods/kube-controller-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d6/kube-controller/1.log
echo '2021-06-22T10:27:25.813799277Z stdout F at is super awesome! Containerd is awesome' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log



echo '2024-04-13T07:59:37.505201169-05:00 stdout F standalone crio line which is awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '2021-06-22T10:27:25.813799277Z stdout F standalone containerd line that is super awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
```
4. Run the above as a bash script so the writes exercise parallel processing of the files.
Verify that the output is correct.


### Test manually on k8s

1. `make docker-otelcontribcol && docker tag otelcontribcol
otelcontribcol-dev:0.0.1 && kind load docker-image
otelcontribcol-dev:0.0.1`
2. Install using the following helm values file:
```yaml
mode: daemonset
presets:
  logsCollection:
    enabled: true

image:
  repository: otelcontribcol-dev
  tag: "0.0.1"
  pullPolicy: IfNotPresent

command:
  name: otelcontribcol

config:
  exporters:
    debug:
      verbosity: detailed
  receivers:
    filelog:
      start_at: end
      include_file_name: false
      include_file_path: true
      exclude:
        - /var/log/pods/default_daemonset-opentelemetry-collector*_*/opentelemetry-collector/*.log
      include:
        - /var/log/pods/*/*/*.log
      operators:
        - id: container-parser
          type: container
          output: some
        - id: some
          type: add
          field: attributes.key2.key_in
          value: val2


  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [batch]
        exporters: [debug]
```
3. Check the Collector's output to verify that the logs are parsed properly:
```console
2024-05-10T07:52:02.307Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 2}
2024-05-10T07:52:02.307Z	info	ResourceLog #0
Resource SchemaURL: 
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-05-10 07:52:02.046236071 +0000 UTC
Timestamp: 2024-05-10 07:52:01.92533954 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 07:52:01)
Attributes:
     -> log: Map({"iostream":"stdout"})
     -> time: Str(2024-05-10T07:52:01.92533954Z)
     -> k8s: Map({"container":{"name":"busybox","restart_count":"0"},"namespace":{"name":"default"},"pod":{"name":"daemonset-logs-6f6mn","uid":"1069e46b-03b2-4532-a71f-aaec06c0197b"}})
     -> logtag: Str(F)
     -> key2: Map({"key_in":"val2"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-6f6mn_1069e46b-03b2-4532-a71f-aaec06c0197b/busybox/0.log)
Trace ID: 
Span ID: 
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-05-10 07:52:02.046411602 +0000 UTC
Timestamp: 2024-05-10 07:52:02.027386192 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 07:52:02)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-6f6mn_1069e46b-03b2-4532-a71f-aaec06c0197b/busybox/0.log)
     -> time: Str(2024-05-10T07:52:02.027386192Z)
     -> log: Map({"iostream":"stdout"})
     -> logtag: Str(F)
     -> k8s: Map({"container":{"name":"busybox","restart_count":"0"},"namespace":{"name":"default"},"pod":{"name":"daemonset-logs-6f6mn","uid":"1069e46b-03b2-4532-a71f-aaec06c0197b"}})
     -> key2: Map({"key_in":"val2"})
Trace ID: 
Span ID: 
Flags: 0
...
```


**Documentation:** Added.

Signed-off-by: ChrsMark <[email protected]>
jinja2 pushed a commit that referenced this pull request Jul 9, 2024
…try#33225)

**Description:**
Using the DB span example below, the X-Ray exporter failed to generate the
expected DB call subsegment names because it could not parse JDBC
connection strings that start with the `jdbc:` prefix.
```
Span #1
    Trace ID       : 663a0b68a5e3849c09c07f914b3df738
    Parent ID      : 1052e2a4a2516884
    ID             : 374de78b552e23c2
    Name           : orders@no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com
    Kind           : Client
    Start time     : 2024-05-07 11:07:20.62 +0000 UTC
    End time       : 2024-05-07 11:07:20.624 +0000 UTC
    Status code    : Unset
    Status message :
Attributes:
     -> db.connection_string: Str(jdbc:mysql://no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com:3306)
     -> db.name: Str(orders)
     -> db.system: Str(MySQL)
     -> db.user: Str([email protected])
```
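
For illustration, a minimal sketch of the idea behind the fix (not the exporter's actual code; `trimJDBCPrefix` is a hypothetical helper): strip the `jdbc:` scheme prefix so the remainder parses as an ordinary URL, then build the `<db.name>@<host>` subsegment name from the host.
```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// trimJDBCPrefix is a hypothetical helper: drop the "jdbc:" scheme so the
// remaining connection string parses as a regular URL.
func trimJDBCPrefix(connStr string) string {
	return strings.TrimPrefix(connStr, "jdbc:")
}

func main() {
	connStr := "jdbc:mysql://no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com:3306"
	u, err := url.Parse(trimJDBCPrefix(connStr))
	if err != nil {
		panic(err)
	}
	// Subsegment name of the form "<db.name>@<host>", matching the span above.
	fmt.Println("orders@" + u.Hostname())
}
```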

**Link to tracking Issue:**

**Testing:**
Local tests.
jinja2 pushed a commit that referenced this pull request Jul 9, 2024
…pen-telemetry#33353)

**Description:**
The container parser should add k8s metadata as resource attributes, not
as log record attributes.
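
As a minimal sketch of the distinction (using the Collector's `pdata` log model; this is an illustration, not the parser's code), pod-level k8s metadata goes on the resource while per-entry fields stay on the log record:
```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/plog"
)

func main() {
	logs := plog.NewLogs()
	rl := logs.ResourceLogs().AppendEmpty()

	// k8s metadata describes the emitting pod/container, so it belongs on the resource.
	res := rl.Resource().Attributes()
	res.PutStr("k8s.namespace.name", "default")
	res.PutStr("k8s.pod.name", "daemonset-logs-dhzcq")
	res.PutStr("k8s.container.name", "busybox")

	// Per-entry fields such as the CRI logtag stay on the individual log record.
	lr := rl.ScopeLogs().AppendEmpty().LogRecords().AppendEmpty()
	lr.Body().SetStr("otel logs at 06:40:07")
	lr.Attributes().PutStr("logtag", "F")

	fmt.Println(logs.LogRecordCount()) // 1
}
```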

**Link to tracking Issue:** Fixes open-telemetry#33341

**Testing:**
Manual testing on a local k8s cluster:

```console
2024-06-04T06:40:08.219Z	info	ResourceLog #0
Resource SchemaURL: 
Resource attributes:
     -> k8s.pod.uid: Str(d5ecc924-e255-4525-b5be-6437939b1e4d)
     -> k8s.container.name: Str(busybox)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.name: Str(daemonset-logs-dhzcq)
     -> k8s.container.restart_count: Str(0)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-06-04 06:40:08.007370503 +0000 UTC
Timestamp: 2024-06-04 06:40:07.855932421 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 06:40:07)
Attributes:
     -> logtag: Str(F)
     -> key2: Map({"key_in":"val2"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-dhzcq_d5ecc924-e255-4525-b5be-6437939b1e4d/busybox/0.log)
     -> time: Str(2024-06-04T06:40:07.855932421Z)
     -> log.iostream: Str(stdout)
Trace ID: 
Span ID: 
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-06-04 06:40:08.007451031 +0000 UTC
Timestamp: 2024-06-04 06:40:07.957875321 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 06:40:07)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-dhzcq_d5ecc924-e255-4525-b5be-6437939b1e4d/busybox/0.log)
     -> log.iostream: Str(stdout)
     -> time: Str(2024-06-04T06:40:07.957875321Z)
     -> key2: Map({"key_in":"val2"})
     -> logtag: Str(F)
Trace ID: 
Span ID: 
Flags: 0
```

**Documentation:** ~

---------

Signed-off-by: ChrsMark <[email protected]>
jinja2 pushed a commit that referenced this pull request Jul 9, 2024
…try.log_response_body` config (open-telemetry#33854)

**Description:**
- Add `telemetry.log_request_body` and `telemetry.log_response_body`
config options for debugging. The debug log will contain the fields `request_body`
and/or `response_body` in the same log line, rather than on separate lines, to
avoid interleaved log lines.
- Change the "Request failed" log level to debug.

Output:
```
2024-07-02T14:09:24.983+0100	debug	elasticsearchexporter/elasticsearch_bulk.go:67	Request roundtrip completed.	{"kind": "exporter", "data_type": "logs", "name": "elasticsearch", "response_body": "{\"version\":{\"number\":\"1.2.3\"}}\n", "path": "/", "method": "GET", "duration": 0.000865486, "status": "200 OK"}
2024-07-02T14:09:24.984+0100	debug	elasticsearchexporter/elasticsearch_bulk.go:67	Request roundtrip completed.	{"kind": "exporter", "data_type": "logs", "name": "elasticsearch", "request_body": "{\"create\":{\"_index\":\"logs-test-idx\"}}\n{\"@timestamp\":\"2024-07-02T13:09:24.970187592Z\",\"Attributes\":{\"a\":\"test\",\"b\":5,\"batch_index\":\"batch_1\",\"c\":3,\"d\":true,\"item_index\":\"item_1\"},\"Body\":\"Load Generator Counter #0\",\"Scope\":{\"name\":\"\",\"version\":\"\"},\"SeverityNumber\":11,\"SeverityText\":\"INFO3\",\"TraceFlags\":1}\n{\"create\":{\"_index\":\"logs-test-idx\"}}\n{\"@timestamp\":\"2024-07-02T13:09:24.970187592Z\",\"Attributes\":{\"a\":\"test\",\"b\":5,\"batch_index\":\"batch_1\",\"c\":3,\"d\":true,\"item_index\":\"item_2\"},\"Body\":\"Load Generator Counter #1\",\"Scope\":{\"name\":\"\",\"version\":\"\"},\"SeverityNumber\":11,\"SeverityText\":\"INFO3\",\"TraceFlags\":1}\n", "response_body": "{\"took\":0,\"errors\":false,\"items\":[{\"create\":{\"_index\":\"logs-test-idx\",\"_id\":\"\",\"_version\":0,\"result\":\"\",\"status\":201,\"_seq_no\":0,\"_primary_term\":0,\"_shards\":{\"total\":0,\"successful\":0,\"failed\":0},\"error\":{\"type\":\"\",\"reason\":\"\",\"caused_by\":{\"type\":\"\",\"reason\":\"\"}}}},{\"create\":{\"_index\":\"logs-test-idx\",\"_id\":\"\",\"_version\":0,\"result\":\"\",\"status\":201,\"_seq_no\":0,\"_primary_term\":0,\"_shards\":{\"total\":0,\"successful\":0,\"failed\":0},\"error\":{\"type\":\"\",\"reason\":\"\",\"caused_by\":{\"type\":\"\",\"reason\":\"\"}}}}]}\n", "path": "/_bulk", "method": "POST", "duration": 0.000539979, "status": "200 OK"}
```

Required config to enable this logging:
```
exporters:
  elasticsearch:
    telemetry:
      log_request_body: true
      log_response_body: true
    
service:
  telemetry:
    logs:
      level: debug
```

For easier analysis, limit the size of each request body: set
`num_workers` to 1 and lower `flush.bytes` and/or `flush.interval`.
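
For example, a possible configuration along those lines (values are illustrative, assuming the exporter's `num_workers` and `flush` settings referenced above):
```yaml
exporters:
  elasticsearch:
    num_workers: 1      # a single worker keeps request/response log lines easy to follow
    flush:
      bytes: 4096       # flush small batches so each request_body stays short
      interval: 1s
    telemetry:
      log_request_body: true
      log_response_body: true
```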

**Link to tracking Issue:**

**Testing:**

Manually verified with a modified integration test.

**Documentation:**