Deck Details
| Cards | 72 |
|---|---|
| Language | English |
| Category | Computer Science |
| Level | University |
| Created / Updated | 28.11.2020 / 17.07.2021 |
| Weblink | https://card2brain.ch/box/20201128_dbt |
Cloud integration tests vs. fog integration tests
- Cloud integration tests
  - Mock services, data, and devices
  - Evaluate corner cases which usually should not exist in production
- Fog integration tests
  - Much more difficult because of the physical infrastructure
  - (Partial) solution: virtualize and emulate the fog environment in the cloud
 
What is live testing?
Examples?
- Test a new software version in production
- Monitor what happens
  - While rolling out an update gradually
  - While directing part of the traffic to the old and/or new version
- Example: Blue/Green Deployments
  - Deploy the new version to the blue environment
  - Run smoke tests against the blue system
  - Switch traffic from green to blue
  - Switch back to green on errors
- Canary releasing (see the sketch below)
  - Roll out a new version only to a subset of production servers
  - Easy to revert
  - Can be used for A/B testing
  - Check capacity requirements by incrementally increasing the load
- Dark/Shadow Launches
  - Functionality is deployed in a production environment without being visible or activated
  - Production traffic is duplicated and routed to the shadow version as well
  - Observe the shadow version without impacting the user
 
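A minimal sketch in Python of the canary mechanics described above: a weighted router sends a small share of traffic to the new version and widens or reverts the rollout based on observed errors. The version names, weights, and error-rate check are illustrative assumptions, not any platform's API.

```python
import random

def pick_backend(canary_weight: float) -> str:
    """Route a request: 'canary' with probability canary_weight,
    otherwise the 'stable' version (illustrative names)."""
    return "canary" if random.random() < canary_weight else "stable"

def adjust_rollout(canary_weight: float, error_rates: dict) -> float:
    """Widen the canary share stepwise; revert to 0 on regressions,
    which is what makes canary releases easy to roll back."""
    if error_rates["canary"] > error_rates["stable"]:
        return 0.0  # revert: all traffic back to the stable version
    return min(1.0, canary_weight + 0.1)  # incrementally increase load

# Example: start with 5% canary traffic; healthy metrics widen it to 15%.
weight = 0.05
print(pick_backend(weight))
weight = adjust_rollout(weight, {"canary": 0.01, "stable": 0.01})
print(f"new canary share: {weight:.0%}")
```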
Live testing in a fog environment?
What are alternative solutions for the edge part?
- Cloud part: ok
- Difficult on edge devices, which
  - may not have the capacity to run two versions in parallel
  - may have safety requirements which make canary releases impossible
- Find a separate solution for the edge part
  - Mock edge devices in the cloud
  - Have a physical testbed
 
Deploying cloud applications vs. deploying fog applications
- Deploying cloud applications
  - Changes are pushed to devices via IaC
  - New virtual devices are created, configured, and deployed with the new version
  - Old instances are disconnected/terminated
- Deploying fog applications
  - Edge devices often need to be physically connected at least once to deploy the first version
  - Use an app-store-like approach: the update is sent to a central software repository
  - The deployed application frequently checks for updates and self-updates if necessary => pull approach (see the sketch below)
  - Plan for incompatibilities and different versions on devices
  - Use versioned interfaces
 
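A minimal sketch of the pull approach, assuming a hypothetical central repository that serves the latest version number as plain text; the URL and the download_and_install stub are illustrative, not a real API.

```python
import time
import urllib.request

CURRENT_VERSION = "1.4.2"
VERSION_URL = "https://repo.example.com/app/latest-version"  # hypothetical

def download_and_install(version: str) -> None:
    print(f"would fetch and install {version}")  # stub for the sketch

def check_for_update() -> None:
    """Ask the central software repository for the latest version."""
    with urllib.request.urlopen(VERSION_URL, timeout=10) as resp:
        latest = resp.read().decode().strip()
    if latest != CURRENT_VERSION:
        download_and_install(latest)

def update_loop(interval_s: int = 3600) -> None:
    """Pull approach: the device polls; nothing is pushed to it."""
    while True:
        try:
            check_for_update()
        except OSError:
            pass  # edge connectivity is unreliable; retry next round
        time.sleep(interval_s)
```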
What is benchmarking?
What is a benchmarking tool?
- Benchmarking is a way to systematically study the quality of cloud services based on experiments
- A benchmarking tool creates an artificial load on the SUT while carefully tracking detailed quality metrics
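A minimal sketch of that definition: generate an artificial load against the SUT while tracking a quality metric, here latency; call_sut is a placeholder for a real request.

```python
import statistics
import time

def call_sut() -> None:
    time.sleep(0.01)  # placeholder for a real request against the SUT

def run_benchmark(requests: int = 100) -> None:
    """Create artificial load and track detailed quality metrics."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        call_sut()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    print(f"mean {statistics.mean(latencies) * 1000:.1f} ms")
    print(f"p99  {latencies[int(0.99 * len(latencies))] * 1000:.1f} ms")

run_benchmark()
```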
What are the benchmarking design objectives?
- Relevance
  - Benchmark the important parts
  - Mimic real-world use
- Repeatability
  - Maximize determinism in the benchmark
- Fairness
  - Treat all SUTs the same
- Portability
  - Avoid assumptions about the SUT
  - Make the benchmark broadly applicable
- Understandability
  - Have an intuitive benchmark specification
 
What are fog-specific benchmarking challenges?
- Geo-distribution of experiments
- Deployment of benchmarking clients for edge-based SUTs
- Distributed measurement of QoS
  - e.g., E2E latency in an IoT data processing pipeline (see the sketch below)
- Multi-workload scenarios
  - Event-driven at the edge
  - OLAP and OLTP in the cloud
- Complex analysis and results
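A minimal sketch of such a distributed QoS measurement: the edge producer stamps each message, the cloud consumer computes the end-to-end latency. The message fields are illustrative, and the subtraction is only meaningful if both clocks are synchronized (e.g., via NTP), which is exactly the clock-synchronization issue noted under the implementation objectives below.

```python
import time

def produce(payload: dict) -> dict:
    """Edge side: stamp the message when it enters the pipeline."""
    payload["sent_at"] = time.time()
    return payload

def consume(message: dict) -> float:
    """Cloud side: E2E latency is arrival time minus the edge stamp.
    Valid only if both machines' clocks are synchronized."""
    return time.time() - message["sent_at"]

msg = produce({"sensor": "temp-01", "value": 21.5})
print(f"E2E latency: {consume(msg) * 1000:.2f} ms")
```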
What are the benchmarking implementation objectives?
- Correctness
  - Assert adherence of the implementation to the specification
- Distribution
  - Build the benchmarking tool for distributed deployments
  - Keep coordination to the pre-benchmark phase
  - Consider clock synchronization
- Fine-grained logging
  - Never discard information unless absolutely necessary
- Reproducibility
  - Use repeatable benchmarks
  - Repeat often
  - Run sufficiently long
  - Document the setting
- Portability
  - Use an adapter design (see the sketch below)
  - Consider extensibility and evolution
  - Avoid assumptions about the SUT
- Ease of use
  - Document everything
  - Provide instructions
  - Release the code
 
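A minimal sketch of the adapter design named under portability: the benchmark core talks only to an abstract interface, and each SUT gets a thin adapter. SUTAdapter and HttpAdapter are assumed names for illustration, not from any specific tool.

```python
import urllib.request
from abc import ABC, abstractmethod

class SUTAdapter(ABC):
    """The benchmark core depends only on this interface."""
    @abstractmethod
    def send_request(self, payload: bytes) -> bytes: ...

class HttpAdapter(SUTAdapter):
    """One thin adapter per SUT keeps assumptions out of the core."""
    def __init__(self, base_url: str):
        self.base_url = base_url

    def send_request(self, payload: bytes) -> bytes:
        req = urllib.request.Request(self.base_url, data=payload)
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read()

def benchmark(adapter: SUTAdapter) -> None:
    # The benchmark logic never changes when a new adapter is added.
    adapter.send_request(b"probe")
```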
Platforms & Applications / Basic Design Principles
State-of-the-art cloud systems?
- Microservice-based design
- Infrastructure automation
- Fault tolerance through replication
- Cluster-based deployment in only a few datacenters
  - Fog: single-node to cluster-sized deployments on millions of sites
 
Platforms & Applications / Basic Design Principles
Geo-awareness in the cloud vs. geo-awareness in the fog
- Geo-awareness in the cloud
  - Limited to large regions
  - High latency if the closest data center is quite far away
- Introduction of fog nodes
  - Fast connection to nearby fog nodes, but limited bandwidth to the cloud
  - Access points of mobile devices must be adapted based on their location
- Geo-awareness: the infrastructure needs to expose location and network topology explicitly
 
Platforms & Applications / Basic Design Principles
Fault tolerance for cloud applications?
Fault tolerance in fog applications?
- Fault tolerance in cloud applications
  - Redundant servers
  - Retry-on-error principle (with other service instances)
  - Monitor services and their workload, auto-scaling
  - Chaos Monkey randomly shuts down services to check whether the system adapts and handles the outage
- Fault tolerance in fog applications
  - The prevalence of faults depends on the number of nodes: systems and/or their components fail continuously
  - Connection infrastructure fails or operates with reduced quality
  - Power outages
  - Some devices transmit data only under certain conditions (e.g., sunlight)
  - Eventual consistency problems may result in stale datasets
- Buffer messages until the receiver is available again (see the sketch below)
- Expect data staleness and ordering issues
- Cache data aggressively
- Compress data items as much as possible on unreliable connections
- Plan for incompatibilities; constantly monitor software versions on devices
- Design for loose coupling
 
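A minimal sketch of the buffering advice above; transmit() stands in for the real send operation and is hard-wired to fail here to simulate an unreachable receiver.

```python
import collections
import time

buffer = collections.deque()

def transmit(message: dict) -> bool:
    """Stand-in for the real send; False means receiver unreachable."""
    return False  # simulate an outage for this example

def send(message: dict) -> None:
    buffer.append(message)  # never drop a message outright
    flush()

def flush() -> None:
    """Deliver in order; stop at the first failure and keep the rest."""
    while buffer:
        if not transmit(buffer[0]):
            break          # receiver still down: keep buffering
        buffer.popleft()   # delivered: remove from the buffer

send({"sensor": "door-07", "event": "open", "ts": time.time()})
print(f"{len(buffer)} message(s) buffered for later delivery")
```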
 
Platforms & Applications / Basic Design Principles
Geo-awareness in fog applications: What requirements?
- Must be aware of its deployment location 
- Needs to handle client movement (handover to other edge devices) 
- Must be prepared to move components elsewhere (stateless application logic) 
- Must move data when necessary 
- May not rely on the availability of remote components 
Case Studies / DeFog
- Motivation
  - An application can be deployed in different ways
  - Various hardware options exist at the edge
  - How can we compare them?
- Deployment options
  - Three deployment modes
    - Cloud only
    - Edge only
    - Cloud-Edge (Fog)
  - Docker as the deployment vehicle
- Approach
  - Use a set of representative benchmark applications
  - Measure E2E performance as well as low-level metrics
  - 6 applications
    - Latency-critical
    - Bandwidth-intensive
    - Location-aware
    - Compute-intensive
 
Case Studies / BeFaaS
- Benchmarking fog-based FaaS platforms
- Federated deployments
  - Different cloud providers
- Workloads
  - E-commerce application
  - IoT application
 
Case Studies / MockFog
- How to evaluate a fog application?
  - Without testing infrastructure: guesses, small local testbeds, and simulation
  - Operating additional edge machines is expensive, and they must be at the same sites as the production machines
- Idea: use an emulated fog infrastructure testbed that is set up in the cloud
  - Size/power of VMs: cloud instance types and Docker resource limits
  - Network characteristics: tc, iptables, etc. (see the sketch below)
- MockFog
  - Three modules
    - Infrastructure emulation
    - Application management
    - Experiment orchestration: comprises a finite set of states; failure testing
  - Node Manager and Node Agents
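A minimal sketch of how such network characteristics can be emulated on a cloud VM with tc/netem, in the spirit of MockFog's approach; the interface name and the delay/loss values are assumptions, and the commands require root privileges.

```python
import subprocess

def shape_network(iface: str = "eth0", delay_ms: int = 50,
                  loss_pct: float = 1.0) -> None:
    """Add artificial delay and packet loss to emulate a fog link."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", iface, "root", "netem",
         "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
        check=True)

def reset_network(iface: str = "eth0") -> None:
    """Remove the emulated network characteristics again."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"],
                   check=True)
```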
Cloud Computing Characteristics (NIST)
- On-Demand Self-Service 
- Broad Network Access 
- Resource Pooling 
- Rapid Elasticity 
- Measured Service 
Why is cloud computing not enough?
- Requires continuous connectivity 
- Too high latency 
- Bandwidth limitations 
- Regulations / privacy requirements 
What is the edge?
Outskirt of an administrative domain
What is Fog Computing?
What does it provide?
- Extension of the cloud model: applications can reside on multiple layers of a network's topology
- Combines cloud resources with edge devices and potential intermediary nodes in the network
- Provides the ability to analyze data near the edge
  - to improve efficiency, or
  - to operate while disconnected from a larger network
- Cloud services can be used for tasks that require more resources or elasticity
Fog Computing Characteristics
- Runs required computations near the end user
- Uses lower-latency storage at or near the edge
- Uses low-latency communication
- Implements elements of management
- Uses the cloud for strategic tasks
- Multi-tenancy on a massive scale is required for some use cases
- Geo-distributed: physical location is significant
- A dynamic pool of sites => unreliable connections between sites
- Sites may be resource-constrained
 
In which areas does Fog Computing provide benefits?
- Data Collection, Analytics & Privacy
- Security
  - Moving security closer to the edge => higher-performance security applications
- Compliance Requirements
  - Geofencing, data sovereignty, copyright enforcement
- Real-Time
Challenges to the Adoption of Fog Computing (inherent)
- General: they result from the very idea of using fog resources
  - Technical constraints: limits of computational power
  - Logical constraints: tradeoffs in distributed systems
  - Market constraints: there are currently no managed edge services
- No Edge Services
- Lack of Standardized Hardware
- Management Effort
- Managing QoS
  - IoT or autonomous cars have stronger quality requirements
  - More problems (network latency/partitioning, message loss/reordering) in non-centralized systems
- No Network Transparency
Challenges to the Adoption of Fog Computing (external)
- General: they result from external entities
  - Government agencies
  - Attackers
- Physical Security
  - E.g., attaching hardware on top of a street light pole instead of at eye level
  - Protection against fire and vandalism
- Legal and Regulatory Requirements
  - Data may need to be held in a certain physical location (eHealth)
  - Liquid fog-based applications might have trouble fulfilling certain aspects of privacy regulations
 
Synchronous Communication
- Examples: phone call, method call in Java
- Requires both parties to be online
- The caller must wait, and both server and client need to be alive
- Disadvantages
  - Higher probability of failures
  - Difficult to identify and react to failures
  - Finding out when a failure took place is not easy
  - The one-to-one model is not practical for complex interactions
Asynchronous Communication
- Clients can do other things while they are waiting (see the sketch below)
- Examples: email, JavaScript callbacks
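A minimal sketch contrasting the two models; slow_service is a placeholder, and the callback is dispatched on a plain thread rather than a real event loop.

```python
import threading
import time

def slow_service() -> str:
    time.sleep(1)  # placeholder for a remote call
    return "result"

# Synchronous: the caller blocks; both sides must stay alive throughout.
print(slow_service())

# Asynchronous: register a callback and continue immediately.
def on_done(value: str) -> None:
    print("callback received:", value)

threading.Thread(target=lambda: on_done(slow_service())).start()
print("caller keeps working while the service runs")
```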
Types of decoupling
- Space (Location) 
- Time 
- Technology 
- Data Format 
Messaging patterns
- Request / Response (1 to 1) 
- Load Balancing (1 to many) 
- Fan-out / Fan-in (1 to many / many to 1) 
- Broadcasting (many to many) 
- Pub/Sub (many to many, but structured) 
What is Pub/Sub Messaging?
- Clients can act as Publisher or Subscriber or both 
- Communication is many-to-many 
Pub/Sub: Matching of Events and Subscriptions
- Channel-based (low level of expressiveness) 
- Topic-based 
- Content-based (high level of expressiveness) 
Pub/Sub: Broker vs. P2P
- The broker handles client communication centrally
- P2P clients have to route messages themselves
- Broker-based setups are a good fit for the fog (see the sketch below)
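A minimal sketch of a broker with topic-based matching and an optional content-based filter layered on top; all names are illustrative, and real brokers (e.g., MQTT implementations) add wildcards, QoS levels, and persistence.

```python
from collections import defaultdict

class Broker:
    """Central broker: publishers and subscribers never meet directly,
    which gives the decoupling described above."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> [(filter, cb)]

    def subscribe(self, topic, callback, content_filter=None):
        self.subscribers[topic].append((content_filter, callback))

    def publish(self, topic, event):
        for content_filter, callback in self.subscribers[topic]:
            # Topic-based match first; content-based filter on top.
            if content_filter is None or content_filter(event):
                callback(event)

broker = Broker()
broker.subscribe("sensors/temp", print,
                 content_filter=lambda e: e["value"] > 30)
broker.publish("sensors/temp", {"value": 35})  # delivered
broker.publish("sensors/temp", {"value": 20})  # filtered out
```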