
Structure of discussion

Benedikt Galbavy 2025-04-19 12:52:45 +02:00
parent 8df32aeb73
commit 8831e87a3c
5 changed files with 456 additions and 15 deletions

.gitignore

@@ -7,3 +7,5 @@ webserver/base/shared/
*.pdf
tex/out/
tex/build/

tex/.latexmkrc Normal file

@@ -0,0 +1,11 @@
$out_dir = 'build';
$pdflatex = 'pdflatex -synctex=1 -interaction=nonstopmode -shell-escape --output-directory=$out_dir %O %S';
$pdf_mode = 1;
$clean_ext = 'acn acr alg aux bbl blg fdb_latexmk fls glg glo gls idx ilg ind lof log lot out toc xdy';
$latexmk_postprocess = sub {
    system("cp $out_dir/*.pdf .");
};
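
With this rc file in place, latexmk picks up the settings automatically when run from the tex/ directory; a minimal invocation (the main file name below is only a placeholder) would be:

    latexmk -pdf thesis.tex   # builds into build/ via the overridden pdflatex call (minted requires -shell-escape)
    latexmk -C                # full clean-up, including the extensions listed in $clean_ext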

tex/img/fhtw_cover.png Normal file

Binary file not shown.


@@ -30,7 +30,12 @@
%% Define additional literature databases here
%\addbibresource{Literaturdatenbank.bib}
-\usepackage{svg}
+\usepackage[newfloat]{minted}
\usepackage{caption}
\newenvironment{code}{\captionsetup{type=listing}}{}
\SetupFloatingEnvironment{listing}{name=Source Code}
% The following packages provide features that are otherwise not needed
\usepackage{blindtext}
@@ -128,7 +133,7 @@ Secondary scenario if applicable
\chapter{Reproducibility}
Since the docker host system will also be tested, it also needs to be reproducible---to achieve that, it will be instantiated as a virtual machine. Since the term host often has different meanings, especially in the context of containerization, this section clarifies the terms used for the rest of the thesis:
-The device on which the VM is hosted will henceforth be called VM-host; the host of the Docker containers---the described VM---will be called docker host or just host. To allow reliable reproduction of attacks, these will also be made from a VM, which will be called the client-vm, or just client. If any further services are required, which would normally be external ``on the internet'', a third vm will be used, the ``external-vm''. The VM-host will only ever be used for configuring the VMs, never to test anything. The base configuration can be found in \ref{appendix_config}.
+The device on which the VM is hosted will henceforth be called VM-host; the host of the Docker containers---the described VM---will be called docker host, docker VM, or just host. To allow reliable reproduction of attacks, these will also be launched from a VM, which will be called the client-vm, or just client. If any further services are required that would normally be external ``on the internet'', a third VM will be used: the ``external-vm''. The VM-host will only ever be used for configuring the VMs, never to test anything. The base configuration can be found in \ref{appendix_base_config}.
\section{The Host of the Host}
@@ -142,7 +147,9 @@ Tools have been selected based on reproducibility and compatibility, but not per
\section{Tooling for the VM-Host}
-To evaluate the effectiveness of base configuration and the implemented measures, a series of controlled attacks are performed from the client VM against the running services in the docker host. The process is split into three phases, mirroring real world scenarios:
+To evaluate the effectiveness of the base configuration and the implemented measures, a series of controlled attacks is performed from the client VM against the running services on the docker host. At first, Ubuntu Desktop was considered as the OS; however, as the client VM is not the focus of this thesis and thus does not need to be representative of the real world to the same degree as the docker VM, Kali Linux was determined to be the better option due to its suite of preinstalled tooling for the simulated attacks.
The process is split into three phases, mirroring real world scenarios:
Reconnaissance: Tools like nmap, netcat and curl are used to discover any open ports, services, and misconfigurations.
Exploitation: Metasploit and custom scripts are used to test the effectiveness of known exploits on a specific configuration. Due to the reproducibility of the environment, effectiveness can be measured and compared as a simple pass/fail rate.
Post-Exploitation: After gaining access, tools like linpeas and manual inspection are used to determine access to shared resources.
@@ -156,19 +163,21 @@ Reconnaissance: Tools like nmap, netcat and curl are used to discover any open p
Exploitation: Metasploit and custom scripts are used to test the effectiveness of known exploits on a specific configuration. Due to the reproducibility of the environment, effectiveness can be measured and compared as a simple pass/fail rate.
Post-Exploitation: After gaining access, tools like linpeas and manual inspection are used to determine access to shared resources.
-The goal in these tests is not to discover novel exploits, but to simulate real world attack paths and analyse the additional risk introduced by the hybrid architecture.
+The goal in these tests is not to discover novel exploits, but to simulate real world attack paths and analyse the additional risk introduced by the hybrid architecture. It should also be noted that some tested measures only protect against a specific step or assume certain prerequisites---some steps will thus be skipped where applicable.
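
As a rough illustration of these phases (a sketch only---the concrete commands, flags and modules differ for each tested configuration), a run against the docker host at 192.168.56.10 resembles the following:
\begin{minted}{bash}
# Illustrative outline of the three phases; not a fixed test script.
sudo nmap -sS -p1-65535 192.168.56.10     # reconnaissance: full TCP port scan
curl -vk https://gitea.vm.local/          # reconnaissance: headers of the proxied service
msfconsole -q -x "search gitea; exit"     # exploitation: look up known Metasploit modules
./linpeas.sh | tee linpeas.txt            # post-exploitation: survey from inside a compromised container
\end{minted}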
\chapter{The Holes in the Wall}
-This chapter describes the tests against the architecture. Each test starts with the configuration described in appendix B, with the corresponding changes as described in appendix C applied via patch\cite{patch1}. Assuming a complete configuration, the VMs are booted with vagrant\cite{hashicorp_vagrant}.
+This chapter describes the tests against the architecture. Each test starts with the configuration described in \ref{appendix_base_config}, with the corresponding changes as described in \ref{appendix_patches} applied via patch\cite{patch1}. Assuming a complete configuration, the VMs are booted with vagrant\cite{hashicorp_vagrant}.
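
Illustratively, a single test iteration is prepared along the following lines; the patch file name is a placeholder for the respective modification from \ref{appendix_patches}:
\begin{minted}{bash}
# Sketch of one test iteration; <variant>.patch is a placeholder name.
patch -p1 < patches/<variant>.patch   # apply the configuration under test
vagrant up                            # boot the docker host (sandbox) and the client VM
vagrant ssh client                    # run the attack phases from the client VM
vagrant destroy -f                    # reset to a clean state before the next test
\end{minted}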
\section{Security analysis---Use-Case: Web Services}
\subsection{Base Configuration}
The base configuration is a minimal configuration, using default values wherever possible.
-Reconnaissance
-NMap Scan:
+\subsubsection*{Reconnaissance}
+\paragraph*{NMap Scan}
Fig. 3: Output of \texttt{sudo nmap -sS -p1-65535 192.168.56.10}
[In progress note: the log output will be attached in text format instead of as a screenshot in a later draft]
@@ -191,11 +200,37 @@ Detailed explanation of found consequences
[TODO: Gitea 1.17.2]
\subsection{Firewall on host system}
\subsection{Firewall in separate docker container}
\subsection{Firewall in NGINX container}
\subsection{Separate docker networks}
\chapter{Discussion - NAME PENDING}
Introduction/Summary
\section{Untested configurations}
Due to the wide array of possible configurations for any docker setup, it is virtually impossible to cover all of them in detail. Nonetheless, this section highlights some of the more common configurations that were left out and reasons about why they were not tested. It is important to note that this list is by no means complete.
\subsection{Alternatives to docker networks}
While it is common to expose specific ports for services---such as 3000 for Node.js and thus Gitea, or variations on 8080 (8081, 8090, \textellipsis) for HTTP services---this approach is prone to port collisions. To avoid this, it is common to use a docker network \cite{a2024_networking} instead, especially as docker compose already registers the name of each service as its hostname. As docker networks are also a common security measure \cite{yasrab_2018_mitigating}, using hostnames improves not only convenience---both in terms of setup and usage---but also security; testing configurations without docker networks would thus not provide any meaningful results.
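
For illustration (assuming the compose project from \ref{appendix_base_config} is running), the effect can be verified directly on the docker host: containers attached to the user-defined network resolve each other by service name, so only the reverse proxy needs published ports.
\begin{minted}{bash}
# Hypothetical check on the docker host; service names resolve via Docker's embedded DNS.
docker compose exec nginx getent hosts gitea vaultwarden
\end{minted}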
\subsection{Hardening of services}
The security of both services in the tested setup can be further improved by implementing the hardening measures suggested in their respective documentation \cite{}\cite{}---some of which are implemented for other tests. Testing the security of the services themselves would, however, go beyond the scope of this thesis, as the selected services merely represent one possible scenario.
\subsection{}
% You can document your AI tools here. They will automatically be integrated into a table.
-\aitoolentry{My Brain}{Writing the thesis}{"Please write a thesis about [\textellipsis]" Entire Document}
+\aitoolentry{My Brain}{Writing the thesis}{``Please write a thesis about [\textellipsis]'' Entire Document}
%
-% The lists and indexes begin here.
+% The lists and indexes begin here
%
\clearpage
\printbibliography
@@ -218,6 +253,7 @@ Detailed explanation of found consequences
\chapter*{\listacroname}
\begin{acronym}[XXXXX]
\acro{VM}[VM]{virtual machine}
\acro{OS}[OS]{Operating System}
\acro{HTTP}[HTTP]{Hypertext Transfer Protocol}
\acro{HTTPS}[HTTPS]{Hypertext Transfer Protocol Secure}
\acro{SSH}[SSH]{Secure Shell}
@@ -231,9 +267,401 @@ Detailed explanation of found consequences
%
\clearpage
\appendix
-\chapter{Appendix A}\label{appendix_mermaid}
+%\chapter{Mermaid Source Code}\label{appendix_mermaid}
%\clearpage
\chapter{Source Code}\label{appendix_config}
\section{Base Configuration}\label{appendix_base_config}
\begin{code}
\captionof{listing}{Vagrantfile}
\label{code:Vagrantfile}
\begin{minted}{ruby}
Vagrant.configure("2") do |config|
  BOX_NAME = "ubuntu/jammy64"
  BOX_VERSION = "20241002.0.0"
  DESKTOP_BOX_NAME = "kalilinux/rolling"
  DESKTOP_BOX_VERSION = "2025.1.0"

  config.vm.define "sandbox" do |sandbox|
    sandbox.vm.box = BOX_NAME
    sandbox.vm.box_version = BOX_VERSION
    sandbox.vm.hostname = "sandbox.vm"
    sandbox.vm.network "private_network", ip: "192.168.56.10"
    sandbox.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
    sandbox.vm.synced_folder ".", "/vagrant"
    sandbox.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "/vagrant/sandbox/playbook.yml"
    end
  end

  config.vm.define "client" do |client|
    client.vm.box = DESKTOP_BOX_NAME
    client.vm.box_version = DESKTOP_BOX_VERSION
    client.vm.hostname = "client.vm"
    client.vm.network "private_network", ip: "192.168.56.20"
    client.vm.provider "virtualbox" do |v|
      v.memory = 4096
      v.cpus = 2
    end
    client.vm.synced_folder ".", "/vagrant"
    client.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "/vagrant/client/playbook.yml"
    end
  end
end
\end{minted}
\end{code}
\begin{code}
\captionof{listing}{sandbox/docker-compose.yml}
\label{code:sandbox:docker}
\begin{minted}{yaml}
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    networks:
      - internal
    environment:
      DOMAIN: "https://bitwarden.vm.local"
      DATABASE_URL: "postgres://vaultwarden:vaultwarden@vaultwarden-db/vaultwarden"
    volumes:
      - ./vw-data/:/data/
    expose:
      - 80

  vaultwarden-db:
    image: docker.io/library/postgres:latest
    container_name: vaultwarden-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: vaultwarden
      POSTGRES_USER: vaultwarden
      POSTGRES_PASSWORD: vaultwarden
    volumes:
      - ./vw-postgres:/var/lib/postgresql/data
    networks:
      - internal

  gitea:
    image: docker.gitea.com/gitea:latest
    container_name: gitea
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=gitea-db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=gitea
      - GITEA__security__INSTALL_LOCK=true
    restart: unless-stopped
    networks:
      - internal
    volumes:
      - ./gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    expose:
      - 3000
      - 22

  gitea-db:
    image: docker.io/library/postgres:latest
    container_name: gitea-db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=gitea
      - POSTGRES_DB=gitea
    volumes:
      - ./postgres:/var/lib/postgresql/data
    networks:
      - internal

  nginx:
    image: nginx:latest
    container_name: nginx
    restart: unless-stopped
    networks:
      - internal
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/certs:/etc/nginx/certs
    ports:
      - 80:80
      - 443:443

networks:
  internal:
    driver: bridge
\end{minted}
\end{code}
\begin{code}
\captionof{listing}{sandbox/playbook.yml}
\label{code:sandbox:ansible}
\begin{minted}{yaml}
---
- hosts: all
  become: true
  vars:
    container_count: 1
    default_container_name: docker
    default_container_image: hello-world
    default_container_command: sleep 1

  tasks:
    - name: Install required system packages
      apt:
        pkg:
          - apt-transport-https
          - ca-certificates
          - curl
          - software-properties-common
          - virtualenv
        state: latest
        update_cache: true

    - name: Copy nginx conf
      copy:
        src: /vagrant/sandbox/nginx.conf
        dest: /home/vagrant/nginx.conf

    - name: Copy docker compose
      copy:
        src: /vagrant/sandbox/docker-compose.yml
        dest: /home/vagrant/docker-compose.yml

    - name: Ensure certs directory exists
      file:
        path: /home/vagrant/nginx/certs
        state: directory
        mode: '0755'

    - name: Install mkcert dependencies
      apt:
        pkg:
          - libnss3-tools
          - ca-certificates
        state: present
        update_cache: yes

    - name: Download mkcert binary
      get_url:
        url: https://github.com/FiloSottile/mkcert/releases/latest/download/mkcert-v1.4.4-linux-amd64
        dest: /usr/local/bin/mkcert
        mode: '0755'
      register: mkcert_download

    - name: Ensure mkcert CAROOT directory exists
      file:
        path: /home/vagrant/.local/share/mkcert
        state: directory
        mode: '0755'

    - name: Initialize mkcert CA
      command: mkcert -install
      environment:
        XDG_DATA_HOME: /home/vagrant/.local/share
        CAROOT: /home/vagrant/.local/share/mkcert
      args:
        creates: /home/vagrant/.local/share/mkcert/rootCA.pem

    - name: Generate cert for gitea.vm.local
      command: >
        mkcert
        -cert-file /home/vagrant/nginx/certs/gitea.vm.local.pem
        -key-file /home/vagrant/nginx/certs/gitea.vm.local-key.pem
        gitea.vm.local
      args:
        creates: /home/vagrant/nginx/certs/gitea.vm.local.pem

    - name: Generate cert for bitwarden.vm.local
      command: >
        mkcert
        -cert-file /home/vagrant/nginx/certs/bitwarden.vm.local.pem
        -key-file /home/vagrant/nginx/certs/bitwarden.vm.local-key.pem
        bitwarden.vm.local
      args:
        creates: /home/vagrant/nginx/certs/bitwarden.vm.local.pem

    - name: Ensure export directory exists
      file:
        path: /vagrant/shared/ca
        state: directory
        mode: '0755'

    - name: Copy mkcert rootCA.pem to shared directory
      copy:
        src: /home/vagrant/.local/share/mkcert/rootCA.pem
        dest: /vagrant/shared/ca/rootCA.pem
        remote_src: yes

    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker Repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu focal stable
        state: present

    - name: Update apt and install docker-ce
      apt:
        pkg:
          - docker-ce
          - docker-compose-plugin
        state: latest
        update_cache: true

    - name: Add 'vagrant' and 'git' users to docker group
      user:
        name: "{{ item }}"
        groups: docker
        append: yes
      loop:
        - vagrant
        - git

    - name: Create git user
      user:
        name: git
        shell: /home/git/docker-shell
        home: /home/git
        create_home: yes

    - name: Deploy docker passthrough shell
      copy:
        dest: /home/git/docker-shell
        content: |
          #!/bin/sh
          exec /usr/bin/docker exec -i -u git --env SSH_ORIGINAL_COMMAND="$SSH_ORIGINAL_COMMAND" gitea sh "$@"
        mode: '0755'

    - name: Update SSH config for git user
      blockinfile:
        path: /etc/ssh/sshd_config
        block: |
          Match User git
            AuthorizedKeysCommandUser git
            AuthorizedKeysCommand /usr/bin/docker exec -i -u git gitea /usr/local/bin/gitea keys -c /data/gitea/conf/app.ini -e git -u %u -t %t -k %k

    - name: Restart SSH
      service:
        name: ssh
        state: restarted

    - name: Ensure Docker service is running
      service:
        name: docker
        state: started
        enabled: true

    - name: Run docker compose up -d
      command: docker compose up -d
      args:
        chdir: /home/vagrant
\end{minted}
\end{code}
\begin{code}
\captionof{listing}{sandbox/nginx.conf}
\label{code:sandbox:nginx}
\begin{minted}{text}
server {
    listen 443 ssl;
    server_name gitea.vm.local;
    ssl_certificate /etc/nginx/certs/gitea.vm.local.pem;
    ssl_certificate_key /etc/nginx/certs/gitea.vm.local-key.pem;
    location / {
        proxy_pass http://gitea:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
server {
    listen 443 ssl;
    server_name bitwarden.vm.local;
    ssl_certificate /etc/nginx/certs/bitwarden.vm.local.pem;
    ssl_certificate_key /etc/nginx/certs/bitwarden.vm.local-key.pem;
    location / {
        proxy_pass http://vaultwarden:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}
\end{minted}
\end{code}
\begin{code}
\captionof{listing}{client/playbook.yml}
\label{code:client:ansible}
\begin{minted}{yaml}
---
- hosts: all
  become: true
  vars:
    container_count: 1
    default_container_name: docker
    default_container_image: hello-world
    default_container_command: sleep 1

  tasks:
    # - name: Add Metasploit PPA
    #   apt_repository:
    #     repo: ppa:metasploit-official
    #     state: present
    #     update_cache: yes

    - name: Install tools
      apt:
        pkg:
          # - metasploit-framework
          - curl
          - nmap
          - libnss3-tools
        state: present
        update_cache: yes

    - name: Add sandbox hostnames to /etc/hosts
      lineinfile:
        path: /etc/hosts
        line: "192.168.56.10 gitea.vm.local bitwarden.vm.local"
        state: present
\end{minted}
\end{code}
\clearpage
-\chapter{Appendix B}\label{appendix_config}
\section{Configuration Modifications}\label{appendix_patches}
\clearpage
-\chapter{Appendix C}\label{appendix_patches}
\chapter{Test Results}\label{appendix_results}
\section{Command Outputs}\label{appendix_logs}
\end{document}


@@ -691,10 +691,10 @@ urlcolor=TWblue, urlbordercolor=white}
}{}
\Ifstr{\doctype}{}{%
-\renewcommand*{\cover}{PICs/fhtw_cover.png}%
+\renewcommand*{\cover}{img/fhtw_cover.png}%
}{\Ifstr{\institution}{Technikum}%
-{\renewcommand*{\cover}{PICs/fhtw_cover.png}}%
+{\renewcommand*{\cover}{img/fhtw_cover.png}}%
-{\renewcommand*{\cover}{PICs/fhtw_cover.png}}}
+{\renewcommand*{\cover}{img/fhtw_cover.png}}}
\newcommand*{\@supervisor}{}
\newcommand*{\@supervisordesc}{}
\newcommand{\supervisor}[2][]{\gdef\@supervisordesc{#1}\gdef\@supervisor{#2}}