AUDITING A CLOUD PROVIDER’S COMPLIANCE WITH DATA BACKUP REQUIREMENTS: A GAME THEORETICAL ANALYSIS
INTRODUCTION
Recent developments in cloud computing have introduced
significant security challenges in guaranteeing the confidentiality, integrity,
and availability of outsourced data. A service level agreement (SLA) is usually
signed between the cloud provider (CP) and the customer. For redundancy
purposes, it is important to verify the CP's compliance with the data backup
requirements in the SLA.
There exist a number of security mechanisms to check the
integrity and availability of outsourced data. This task can be performed by
the customer or delegated to an independent entity that we will refer to as
the verifier. However, checking the availability of data introduces extra
costs, which can discourage the customer from performing data verification too
often.
The interaction between the verifier and the CP can be
captured using game theory in order to find an optimal data verification
strategy. In this paper, we formulate this problem as a two player
non-cooperative game. We consider the case in which each type of data is replicated
a number of times, which can depend on a set of parameters including, among
others, its size and sensitivity.
We analyze the strategies of the CP and the verifier at
the Nash equilibrium and derive the expected behavior of both players. Finally,
we validate our model numerically on a case study and explain how we evaluate
the parameters of the model.
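To make the equilibrium reasoning concrete, the sketch below computes the mixed-strategy Nash equilibrium of a simplified two-player inspection game. The payoff structure (a saving S for the CP when it deletes a copy, a fine F if caught, an audit cost Cv for the verifier, and a loss L when cheating goes undetected) is a hypothetical instance chosen for illustration, not the paper's exact payoff matrix.

```python
# Mixed-strategy Nash equilibrium of a simplified 2x2 "inspection game".
# Hypothetical payoffs (not the paper's exact model): the CP saves S by
# deleting a backup copy and pays a fine F if caught; the verifier pays a
# verification cost Cv per audit and suffers a loss L if cheating is missed.

def inspection_game_ne(S, F, Cv, L):
    """Return (p, q): p = Pr[verifier audits], q = Pr[CP cheats]."""
    if F <= S:
        raise ValueError("fine must exceed the saving, else the CP always cheats")
    p = S / F          # makes the CP indifferent between cheating and honesty
    q = Cv / (F + L)   # makes the verifier indifferent between auditing or not
    return p, q

# Sanity check: at (p, q) each player is indifferent between its pure actions.
S, F, Cv, L = 3.2, 100.0, 7.0e-4, 10.0
p, q = inspection_game_ne(S, F, Cv, L)

cp_cheat  = p * (S - F) + (1 - p) * S      # CP's expected payoff when cheating
cp_honest = 0.0                            # honesty yields no extra gain
ver_audit = q * (F - Cv) + (1 - q) * (-Cv) # verifier's payoff when auditing
ver_skip  = q * (-L)                       # verifier's payoff when skipping

print(p, q)                       # audit and cheat probabilities
print(abs(cp_cheat - cp_honest))  # ~0: CP is indifferent
print(abs(ver_audit - ver_skip))  # ~0: verifier is indifferent
```

The indifference conditions are exactly what pins down the equilibrium: the verifier audits just often enough (p = S/F) that cheating stops paying off, and the CP cheats just rarely enough (q = Cv/(F+L)) that auditing breaks even.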
The client may be interested in checking the availability of all data
backups using specific protocols, such as the proofs of data retrievability widely
studied in the literature. In these works, efforts have been made to design
solutions that meet various requirements, such as low time complexity, stateless
verification, unbounded use of queries, and retrievability guarantees.
In particular, several protocols allow public verifiability from a Third
Party Auditor (TPA), to which the client can delegate the verification task
through an Audit Level Agreement. This assumption is more realistic, since in
most cases, a lack of resources or expertise will prevent the client from
personally performing these verifications. In this paper, we will consider that
the TPA, which is an independent entity, will be the verifier of the client’s
data in the CP systems.
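The sentinel idea underlying basic POR protocols can be sketched as follows. The block size, the key derivation, and the decision to leave the payload unencrypted are illustrative simplifications, not the construction of any particular scheme; in a real POR the file is encrypted so that sentinels are indistinguishable from data blocks.

```python
# Sketch of a sentinel-based proof of retrievability (POR): the client hides
# pseudorandom sentinel blocks at key-derived positions before outsourcing;
# the verifier later challenges a few positions and compares the values.
import hmac, hashlib, random

BLOCK = 32  # illustrative block size in bytes

def sentinel(key: bytes, i: int) -> bytes:
    # Pseudorandom sentinel block i, recomputable by the verifier from the key.
    return hmac.new(key, b"sentinel-%d" % i, hashlib.sha256).digest()

def encode(key: bytes, blocks: list, n_sentinels: int) -> list:
    # Insert sentinel blocks at key-derived positions before outsourcing.
    out = list(blocks)
    rng = random.Random(hmac.new(key, b"positions", hashlib.sha256).digest())
    positions = rng.sample(range(len(blocks) + n_sentinels), n_sentinels)
    for rank, pos in enumerate(sorted(positions)):
        out.insert(pos, sentinel(key, rank))  # ascending inserts keep indices valid
    return out

def audit(key: bytes, stored: list, n_sentinels: int, n_checks: int) -> bool:
    # Re-derive sentinel positions from the key and spot-check a random subset.
    rng = random.Random(hmac.new(key, b"positions", hashlib.sha256).digest())
    positions = sorted(rng.sample(range(len(stored)), n_sentinels))
    challenged = random.sample(range(n_sentinels), n_checks)
    return all(stored[positions[i]] == sentinel(key, i) for i in challenged)

key = b"secret-key"
data = [bytes([i]) * BLOCK for i in range(100)]
stored = encode(key, data, n_sentinels=10)
print(audit(key, stored, 10, 5))    # intact storage passes the audit

tampered = [b"\x00" * BLOCK] * len(stored)  # CP replaced everything
print(audit(key, tampered, 10, 5))  # audit fails
```

Note the probabilistic nature of the check: a CP that deletes only a small fraction of blocks escapes a single audit with some probability, which is why the number of challenged positions is a tunable cost/assurance trade-off.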
RELATED WORK ON CLOUD PROVIDER COMPLIANCE
It is important to verify the cloud provider’s compliance with the
security requirements in the SLA. For example, Popa et al. designed a
proof-based system to enable security guarantees in an SLA. In recent years, a
significant number of data integrity schemes have been proposed by different
researchers and gradually adapted to specific use cases such as
outsourced databases and cloud computing; works focusing on public verifiability
proved noticeably helpful here, as they allow clients to delegate the
verification process to third parties.
Among these schemes, the two main directions explored by researchers
include the Provable Data Possession (PDP) for ensuring possession of data, and
the Proof of Retrievability (POR) for data possession and retrievability. The
main idea of PDP is that a data owner generates some metadata information for a
data file to be used later for verification purposes.
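This metadata idea can be illustrated with a deliberately simplified challenge-response in which the owner keeps one HMAC tag per block. Real PDP schemes use homomorphic tags precisely so that the server can be audited without returning whole blocks; the class and method names below are illustrative.

```python
# Simplified PDP-style challenge-response: the owner keeps per-block HMAC
# tags as metadata, later challenges random block indices, and verifies the
# returned blocks against the stored tags.
import hmac, hashlib, random

def tag(key: bytes, index: int, block: bytes) -> bytes:
    # Per-block verification tag, bound to the block's index.
    msg = index.to_bytes(8, "big") + block
    return hmac.new(key, msg, hashlib.sha256).digest()

class Owner:
    def __init__(self, key, blocks):
        self.key = key
        self.n = len(blocks)
        self.tags = [tag(key, i, b) for i, b in enumerate(blocks)]  # metadata kept locally

    def challenge(self, c):
        return random.sample(range(self.n), c)  # random spot-check indices

    def verify(self, indices, returned_blocks):
        return all(tag(self.key, i, b) == self.tags[i]
                   for i, b in zip(indices, returned_blocks))

class Server:
    def __init__(self, blocks):
        self.blocks = list(blocks)

    def respond(self, indices):
        return [self.blocks[i] for i in indices]

blocks = [b"block-%d" % i for i in range(50)]
owner = Owner(b"audit-key", blocks)
server = Server(blocks)

idx = owner.challenge(5)
print(owner.verify(idx, server.respond(idx)))   # honest server passes

server.blocks[7] = b"corrupted"                 # server loses/alters a block
idx = list(range(50))                           # challenge every block
print(owner.verify(idx, server.respond(idx)))   # full audit now fails
```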
Many extensions of this scheme managed to decrease the communication cost
and complexity, as well as to allow dynamic operations on data such as insertion,
modification, or deletion. Moreover, several works proposed PDP schemes specific to cloud
computing. In the cloud domain, game theory has emerged in recent years as an
important tool to analyze the interactions between multiple players with the
same or conflicting interests.
It has been used to study a number of problems, including resource
allocation and management and cloud service negotiation, while some research
papers have addressed the problem of cloud security. To address cloud integrity
issues, one line of work proposed a model in which a client checks the
correctness of calculations made on the data by the CP. Nix et al. study
the case of querying a single cloud provider, since checking data at multiple CPs is
prohibitively expensive. Other works focused on checking whether the queries sent to
the CP are computed correctly, under the condition that the stored data
is intact.
NUMERICAL ANALYSIS
Impact of C_s. An increase in the cost of executing the verification
query C_s·S will have no significant impact on the pattern of change of the TPA's
NE strategy w.r.t. S. However, the stable values of q* for large values of S change: in particular,
they increase for q*_0 and q*_1 and decrease for q*_2. The TPA increases the frequency of checking
one copy instead of two copies, since checking either an honest CP or a CP that
passes the verification test will entail a higher cost C_s·S for the TPA.
Impact of C_t. The TPA's strategy at the NE is independent of C_t. In Fig. 2a, as with
greater values of C_s, a similar change is observed in the CP's NE strategy when
increasing the cost of processing the verification query for the TPA, C_t·S.
However, in this case, the CP's strategy changes more quickly w.r.t. S.
Impact of ε. In Fig. 2b, when ε increases, the rate of change of the TPA's NE strategy
decreases. For large values of S, we notice an increase of the stable values for
q*_0 and q*_1 and a decrease for q*_2. The TPA's NE reflects his belief that the CP
will more likely behave honestly, given the increased incentive offered to him
when behaving as such. However, this incentive is given to the CP when the TPA
fails to detect a malicious act by the CP and, therefore, it does not completely
prevent such a scenario.
Impact of F. In Fig. 2c, when F increases, the rate of
change of the CP's NE strategy decreases.
Numerical Example
We consider the case where N = 3 data items are outsourced. The
characteristics of data D1, D2, D3 are defined as in Table I.
First, we compute the value of S_m for each data item D_m. From published estimates, the
storage cost for a CP can be estimated at between about 100 picocent/bit and 300
picocent/bit per year (1 picocent = 10^-14 $). In this case study, we consider a
time period of one year and an average storage cost of 200 picocent/bit.
Therefore, we find S_1 = 0.000016 $, S_2 = 0.08 $, and S_3 = 3.2 $.
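These storage costs can be reproduced with the following arithmetic. Since Table I is not reproduced in this excerpt, the data sizes used below (1 MB, 5 GB, and 200 GB) are assumptions, back-calculated so as to be consistent with the stated values of S_1, S_2, and S_3.

```python
# Yearly storage cost S_m = size_in_bits * rate, with the rate expressed
# in dollars per bit per year (200 picocent/bit, 1 picocent = 1e-14 $).

PICOCENT = 1e-14            # dollars
rate = 200 * PICOCENT       # $ per bit per year

sizes_bits = {              # hypothetical sizes consistent with S_1, S_2, S_3
    "D1": 1e6 * 8,          # 1 MB
    "D2": 5e9 * 8,          # 5 GB
    "D3": 200e9 * 8,        # 200 GB
}

S = {name: bits * rate for name, bits in sizes_bits.items()}
print(S)   # costs in $: ~1.6e-05, 0.08, 3.2
```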
For the evaluation of C_s and C_t, the verification scheme we implemented
is based on the open-source proof-of-retrievability project by Zachary
Peterson, and corresponds to the basic POR scheme in which the number of
checked sentinel blocks represents b = 1% of the data file size. We used a Linux
virtual machine running on a laptop with an Intel Core i7 processor with a 2.3
GHz clock frequency and 8 GB of RAM.
For the biggest data item D3, we measured t_TPA = 1.51 s and t_CP = 0.27 s, the
difference being due to the fact that there is no specific processing on the CP
side besides returning the correct sentinel blocks in this scheme. Based on
published estimates, the CP cycle cost can be estimated at 2 picocent/cycle, while the
TPA cycle cost should rather be around 20 picocent/cycle.
Therefore, C_s·S_3 = 1.24×10^-5 $ and C_t·S_3 = 7.0×10^-4 $, which gives us C_s =
3.88×10^-6 and C_t = 2.19×10^-4. Since the primary objective consists of finding the
optimal checking strategy, we focus in this section on the behavior of the TPA.
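The arithmetic behind these cost figures can be checked directly: a verification that takes t seconds on a 2.3 GHz machine consumes t × 2.3×10^9 cycles, each priced at the corresponding per-cycle cost. Small discrepancies in the last digit come from rounding the intermediate totals.

```python
# Back-of-the-envelope check of the verification costs for D3:
# cycles = time * clock_frequency, cost = cycles * cost_per_cycle.

clock_hz = 2.3e9                 # 2.3 GHz laptop
t_cp, t_tpa = 0.27, 1.51         # measured seconds for data D3
PICOCENT = 1e-14                 # dollars
cp_cycle_cost = 2 * PICOCENT     # $ per CP cycle
tpa_cycle_cost = 20 * PICOCENT   # $ per TPA cycle

cost_cp = t_cp * clock_hz * cp_cycle_cost     # ~1.24e-05 $
cost_tpa = t_tpa * clock_hz * tpa_cycle_cost  # ~6.95e-04 $ (~7.0e-04 rounded)

S3 = 3.2                         # yearly storage cost of D3 in $
print(cost_cp, cost_tpa)         # absolute verification costs for D3
print(cost_cp / S3, cost_tpa / S3)  # the per-unit-of-S coefficients
```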
However, if we multiply the size of data D2 by 10 (so that S_2 = 0.8 $), we
notice that the TPA will spend twice as many resources to check the existence
of at least one backup copy of D2.
In this case, for data D2, the TPA anticipates that the reward will be
less effective in preventing the CP from acting dishonestly. It is worth
mentioning that the difference between the values F of the data items and the
other parameters in this case study is substantial, which could raise concerns
about the influence of the values of F over the other parameters. However, if
we multiply the size of data D3 by 10, bringing S_3 to 32 $, which is not
negligible compared to F_3, we do not notice a big difference in the TPA's NE
strategy w.r.t. the results in Table 2.
CONCLUSION
We analyzed the problem of verifying data availability in the case of data
outsourced to a cloud provider. We formulated the problem between the CP and
the TPA as a non-cooperative game. The TPA’s objective is to detect any
deviation from the agreement signed between the CP and the client by checking
the existence of the required number of backup copies of each type of data on
the CP’s servers.
On the other hand, the CP's objective is to increase the storage capacity
of his servers, which translates in practice into storing fewer copies than the
required number included in the contract with the client.
We performed an in-depth analysis of multiple extensions of the simple model,
taking into account the existence of multiple backup copies of each data item. In
each proposed extension, we identified the optimal verification strategy for
the TPA.
Finally, we validated our analytical results on a case study. One of the
interesting results we found relates to the Stackelberg game, in which the TPA
acts as the leader and the CP as the follower. This type of game
reflects realistic scenarios encountered in practice.
The model presented in this paper can be adapted to verify the existence
of the required number of backup copies in specific geographical locations as
is sometimes specified in an SLA. As future work, we plan to investigate the
case where interactions between the CP and the TPA can occur on multiple
occasions over time.
This type of interaction is particularly interesting in a
repeated game setting where a number of TPAs, on behalf of multiple
clients, verify the CP's compliance with the agreements signed with those
clients. In this case, the result of the interactions between the CP and one client
is not limited to that particular client, but extends to impact the behavior of
all the other players in the game.
For example, we can study how the discovery of an improper act by the CP
can affect his reputation and therefore his future payoffs, as clients will be
more inclined to change providers. In this case, players' behaviors may change
after it has been made public that a CP breached his agreement with a client.
Therefore, each short-term gain of the CP must be weighed against the enduring
long-term impact on his reputation, which directly affects his future
profits. The public exposure of the CP's behavior is an important
dimension that needs to be taken into account, as it can play a decisive
deterrent role in forcing the CP to fully respect the backup agreements signed
with his clients.


